Diffusion ZERO

TL;DR: Enter a prompt or roll the 🎲 and press Generate.

Prompting

Positive and negative prompts are embedded by Compel for weighting. See syntax features to learn more.

Use + or - to increase the weight of a token. The weight grows exponentially when chained. For example, blue+ means 1.1x more attention is given to blue, while blue++ means 1.1^2 more, and so on. The same applies to -.

For groups of tokens, wrap them in parentheses and multiply by a float between 0 and 2. For example, a (birthday cake)1.3 on a table will increase the weight of both birthday and cake by 1.3x. This also means the entire scene will be more birthday-like, not just the cake. To counteract this, you can use - inside the parentheses on specific tokens, e.g., a (birthday-- cake)1.3, to reduce the birthday aspect.

This is the same syntax used in InvokeAI, and it differs from AUTOMATIC1111:

| Compel      | AUTOMATIC1111 |
|-------------|---------------|
| blue++      | ((blue))      |
| blue--      | [[blue]]      |
| (blue)1.2   | (blue:1.2)    |
| (blue)0.8   | (blue:0.8)    |
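
To see how this kind of weighting is wired up outside the app, here is a minimal sketch using the compel library with a diffusers pipeline; the model ID and prompts are only illustrative, not necessarily what this Space loads:

```python
import torch
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compel parses the +/- and (group)weight syntax and returns prompt embeddings
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)
prompt_embeds = compel.build_conditioning_tensor("a (birthday-- cake)1.3 on a table")
negative_embeds = compel.build_conditioning_tensor("blurry, low quality")

# pad both tensors to the same length so the pipeline accepts them together
[prompt_embeds, negative_embeds] = compel.pad_conditioning_tensors_to_same_length(
    [prompt_embeds, negative_embeds]
)

image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds).images[0]
```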

Arrays

Arrays allow you to generate multiple different images from a single prompt. For example, an adult [[blonde,brunette]] [[man,woman]] will expand into 4 different prompts. This implementation was inspired by Fooocus.

NB: Make sure to set Images to the number of images you want to generate. Otherwise, only the first prompt will be used.
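To illustrate how an expansion like this could work, here is a small sketch; expand_arrays is a hypothetical helper, not the app's actual implementation:

```python
import re
from itertools import product

def expand_arrays(prompt: str) -> list[str]:
    """Expand [[a,b,...]] groups into the cartesian product of their options."""
    groups = re.findall(r"\[\[(.*?)\]\]", prompt)
    options = [[opt.strip() for opt in group.split(",")] for group in groups]
    prompts = []
    for combo in product(*options):
        expanded = prompt
        for choice in combo:
            # replace each [[...]] group, left to right, with the chosen option
            expanded = re.sub(r"\[\[.*?\]\]", choice, expanded, count=1)
        prompts.append(expanded)
    return prompts

expand_arrays("an adult [[blonde,brunette]] [[man,woman]]")
# -> ['an adult blonde man', 'an adult blonde woman',
#     'an adult brunette man', 'an adult brunette woman']
```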

Models

Each model checkpoint has a different aesthetic:

LoRA

Apply up to 2 LoRA (low-rank adaptation) adapters with adjustable strength:

NB: The trigger words are automatically appended to the positive prompt for you.
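
In plain diffusers, loading and weighting two adapters might look roughly like this (the repository and adapter names below are placeholders; PEFT must be installed):

```python
# load two LoRA adapters and set their individual strengths
pipe.load_lora_weights("some-user/detail-lora", adapter_name="detail")
pipe.load_lora_weights("some-user/style-lora", adapter_name="style")
pipe.set_adapters(["detail", "style"], adapter_weights=[0.8, 0.5])

image = pipe("portrait of a young adult woman, detailed face").images[0]
```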

Embeddings

Select one or more textual inversion embeddings:

NB: The trigger token is automatically appended to the negative prompt for you.
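
In diffusers terms, a negative embedding could be loaded along these lines (the repository and token names are placeholders):

```python
# load a textual inversion embedding and reference its token in the negative prompt
pipe.load_textual_inversion("some-user/bad-quality-embedding", token="<bad-quality>")

image = pipe(
    "portrait of a young adult woman",
    negative_prompt="<bad-quality>, blurry",
).images[0]
```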

Styles

Styles are prompt templates that wrap your positive and negative prompts. They were originally derived from the twri/sdxl_prompt_styler Comfy node, but have since been entirely rewritten.

Start by framing a simple subject like portrait of a young adult woman or landscape of a mountain range and experiment.
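
Conceptually, a style is just a pair of templates with a placeholder for your prompt. A hypothetical sketch (the style text here is invented for illustration, in the spirit of the sdxl_prompt_styler templates):

```python
# hypothetical style definitions; each template wraps the user's prompt
STYLES = {
    "Photographic": {
        "positive": "cinematic photo of {prompt}, 35mm, depth of field, highly detailed",
        "negative": "drawing, painting, illustration, sketch, {prompt}",
    },
}

def apply_style(name: str, positive: str, negative: str = "") -> tuple[str, str]:
    """Wrap the positive and negative prompts in the selected style's templates."""
    style = STYLES[name]
    return (
        style["positive"].format(prompt=positive),
        style["negative"].format(prompt=negative),
    )

apply_style("Photographic", "portrait of a young adult woman", "cartoon")
```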

Anime

The Anime: * styles work best with Dreamshaper. When using the anime-specific Anything model, you should use the Anime: Anything style with the following settings:

  • Scheduler: DEIS 2M or DPM++ 2M
  • Guidance: 10
  • Steps: 50

Your subject should be a few simple tokens like girl, brunette, blue eyes, armor, nebula, celestial. Experiment with Clip Skip and Karras. Finish with the Perfection Style LoRA on a moderate setting and upscale.

Scale

Rescale up to 4x using Real-ESRGAN with weights from ai-forever. Necessary for high-resolution images.
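
Using the ai-forever package directly would look roughly like this (the file paths and 4x scale are illustrative):

```python
import torch
from PIL import Image
from RealESRGAN import RealESRGAN  # pip install git+https://github.com/ai-forever/Real-ESRGAN.git

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = RealESRGAN(device, scale=4)
model.load_weights("weights/RealESRGAN_x4.pth", download=True)

image = Image.open("input.png").convert("RGB")
upscaled = model.predict(image)  # 4x the original resolution
upscaled.save("output.png")
```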

Image-to-Image

The 🖼️ Image tab enables the image-to-image and IP-Adapter pipelines.

Strength

Denoising strength is essentially how much the generation will differ from the input image. A value of 0 will be identical to the original, while 1 will be a completely new image. You may also want to increase the number of inference steps. Only applies to the image-to-image input.
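
As a rough sketch of the underlying diffusers call (the model ID and values are only examples):

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("input.png")

# roughly strength * num_inference_steps denoising steps are actually run,
# so low strength with few steps can look under-baked
image = pipe(
    "a watercolor painting of a mountain range",
    image=init_image,
    strength=0.6,
    num_inference_steps=40,
).images[0]
```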

IP-Adapter

In an image-to-image pipeline, the input image is used as the initial latent. With IP-Adapter, the input image is processed by a separate image encoder and the encoded features are used as conditioning along with the text prompt.

For capturing faces, enable IP-Adapter Face to use the full-face model. You should use an input image that is mostly a face and it should be high quality. You can generate fake portraits with Realistic Vision to experiment. Note that you'll never get true identity preservation without an advanced pipeline like InstantID, which combines many techniques.
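
A minimal diffusers sketch of the IP-Adapter flow; the h94/IP-Adapter checkpoint names are the commonly used SD 1.5 weights, included here as an assumption rather than a statement of what this app loads:

```python
from diffusers.utils import load_image

# attach an IP-Adapter image encoder to an existing SD 1.5 pipeline
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # how strongly the image conditioning is applied

face = load_image("portrait.png")
image = pipe(
    "portrait of a young adult woman, studio lighting",
    ip_adapter_image=face,
).images[0]

# for the full-face variant mentioned above, swap the weight name:
# pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
#                      weight_name="ip-adapter-full-face_sd15.bin")
```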

ControlNet

The 🎮 Control tab enables the ControlNet pipelines. Read the Diffusers docs to learn more.

Annotators

In ControlNet, the input image is a feature map produced by an annotator. These are computer vision models used for tasks like edge detection and pose estimation. ControlNet models are trained to understand these feature maps.

NB: Control images will be automatically resized to the nearest multiple of 64 (e.g., 513 -> 512).
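
For reference, a Canny-edge annotator plus ControlNet in plain diffusers might look like this; the checkpoints are typical SD 1.5 ControlNet weights, used here as an assumption:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# the annotator: turn the input image into a Canny edge map (the feature map)
source = np.array(load_image("input.png"))
edges = cv2.Canny(source, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe("a futuristic city at night", image=control_image).images[0]
```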

Advanced

DeepCache

DeepCache caches lower UNet layers and reuses them every Interval steps. Trade quality for speed:

  • 1: no caching (default)
  • 2: more quality
  • 3: balanced
  • 4: more speed
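
The standalone DeepCache helper is enabled roughly like this, assuming the DeepCache package is installed; cache_interval corresponds to the Interval setting above:

```python
from DeepCache import DeepCacheSDHelper

helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)  # reuse cached features every 3 steps
helper.enable()

image = pipe("portrait of a young adult woman").images[0]

helper.disable()  # turn caching back off when full quality is needed
```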

FreeU

FreeU re-weights the contributions of the UNet's skip connections and backbone feature maps. Can sometimes improve image quality.
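
In diffusers this is a single call; the values below are the SD 1.5 settings suggested by the FreeU authors, included as an assumption rather than the app's exact configuration:

```python
# re-weight backbone (b1, b2) and skip-connection (s1, s2) contributions
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

image = pipe("landscape of a mountain range").images[0]

pipe.disable_freeu()  # restore the default UNet behaviour
```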

Clip Skip

When enabled, the last CLIP layer is skipped. Can sometimes improve image quality.
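
Assuming the underlying pipeline exposes diffusers' clip_skip argument, skipping the last CLIP layer looks like:

```python
# clip_skip=1 skips the final CLIP text-encoder layer when building prompt embeddings
image = pipe("portrait of a young adult woman", clip_skip=1).images[0]
```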

Tiny VAE

Enable madebyollin/taesd for near-instant latent decoding with a minor loss in detail. Useful for development.
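
Swapping in the tiny autoencoder is a one-line change in diffusers (model IDs shown for the SD 1.5 case):

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# replace the full VAE with taesd for near-instant latent decoding
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe.to("cuda")
```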