Spaces: Running on Zero
adamelliotfields committed
Commit: effc0a0
Parent(s): 7a7cda5

Improve navigation
Browse files
- DOCS.md +25 -27
- app.css +5 -18
- app.py +228 -326
- partials/intro.html +3 -14
DOCS.md
CHANGED
## Usage

TL;DR: Enter a prompt or roll the `🎲` and press `Generate`.
### Prompting

Positive and negative prompts are embedded by [Compel](https://github.com/damian0815/compel) for weighting. See [syntax features](https://github.com/damian0815/compel/blob/main/doc/syntax.md) to learn more.

Use `+` or `-` to increase the weight of a token. The weight grows exponentially when chained. For example, `blue+` means 1.1x more attention is given to `blue`, while `blue++` means 1.1^2 more, and so on. The same applies to `-`.

Groups of tokens can be weighted together by wrapping in parentheses and multiplying by a float between 0 and 2. For example, `(masterpiece, best quality)1.2` will increase the weight of both `masterpiece` and `best quality` by 1.2x.
This is the same syntax used in [InvokeAI](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/) and it differs from AUTOMATIC1111:

| Compel | AUTOMATIC1111 |
| ------ | ------------- |
| `(blue)1.2` | `(blue:1.2)` |
| `(blue)0.8` | `(blue:0.8)` |
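As a rough sketch of how this weighting is applied under the hood (assuming an SD 1.5 checkpoint in diffusers format; the Space wires this up for you):

```python
from compel import Compel
from diffusers import StableDiffusionPipeline

# Realistic Vision is one of the checkpoints mentioned in this doc
pipe = StableDiffusionPipeline.from_pretrained("SG161222/Realistic_Vision_V5.1_noVAE")
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# `blue++` gets 1.1^2 attention; the parenthesized group is scaled by 1.2
prompt_embeds = compel("a (masterpiece, best quality)1.2 portrait with blue++ eyes")
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=30).images[0]
```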
#### Arrays

Arrays allow you to generate multiple different images from a single prompt. For example, `an adult [[blonde,brunette]] [[man,woman]]` will expand into **4** different prompts. This implementation was inspired by [Fooocus](https://github.com/lllyasviel/Fooocus/pull/1503).

> NB: Make sure to set `Images` to the number of images you want to generate. Otherwise, only the first prompt will be used.
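A minimal sketch of how such arrays could be expanded into separate prompts (illustrative only; the Space's actual implementation may differ):

```python
import re
from itertools import product

def expand_arrays(prompt: str) -> list[str]:
    """Expand [[a,b]] groups into one prompt per combination."""
    groups = [g.split(",") for g in re.findall(r"\[\[(.*?)\]\]", prompt)]
    if not groups:
        return [prompt]
    expanded = []
    for combo in product(*groups):
        out = prompt
        for choice in combo:
            out = re.sub(r"\[\[.*?\]\]", choice.strip(), out, count=1)
        expanded.append(out)
    return expanded

print(expand_arrays("an adult [[blonde,brunette]] [[man,woman]]"))  # 4 prompts
```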
### Models

Each model checkpoint has a different aesthetic:

* [SG161222/Realistic_Vision_V5](https://huggingface.co/SG161222/Realistic_Vision_V5.1_noVAE): realistic
* [XpucT/Deliberate_v6](https://huggingface.co/XpucT/Deliberate): general purpose stylized
### LoRA

Apply up to 2 LoRA (low-rank adaptation) adapters with adjustable strength:

> NB: The trigger words are automatically appended to the positive prompt for you.
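This maps onto diffusers' LoRA support; a hedged sketch where the repository paths and weights are placeholders and `pipe` is a loaded Stable Diffusion pipeline:

```python
# Load two adapters and blend them at different strengths
pipe.load_lora_weights("path/to/first_lora", adapter_name="lora_1")
pipe.load_lora_weights("path/to/second_lora", adapter_name="lora_2")
pipe.set_adapters(["lora_1", "lora_2"], adapter_weights=[0.8, 0.4])
```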
### Embeddings

Select one or more [textual inversion](https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference) embeddings:

> NB: The trigger token is automatically appended to the negative prompt for you.
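In diffusers terms, each embedding is registered with `load_textual_inversion` and then referenced by its trigger token; a sketch where the file path and token are placeholders and `pipe` is a loaded pipeline:

```python
# Register the embedding under a trigger token, then use the token in the negative prompt
pipe.load_textual_inversion("path/to/embedding.safetensors", token="<bad-quality>")
image = pipe(
    prompt="portrait of a young adult woman",
    negative_prompt="<bad-quality>",
).images[0]
```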
### Styles

[Styles](https://huggingface.co/spaces/adamelliotfields/diffusion/blob/main/data/styles.json) are prompt templates that wrap your positive and negative prompts. They were originally derived from the [twri/sdxl_prompt_styler](https://github.com/twri/sdxl_prompt_styler) Comfy node, but have since been entirely rewritten.

Start by framing a simple subject like `portrait of a young adult woman` or `landscape of a mountain range` and experiment.
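Conceptually a style is just a pair of templates with a placeholder for your text; a simplified sketch (the field names here are illustrative, see `data/styles.json` for the real schema):

```python
style = {
    "positive": "cinematic photo of {prompt}, 35mm, shallow depth of field, film grain",
    "negative": "drawing, painting, sketch, {prompt}",
}

# The user's prompts are substituted into the templates before generation
positive = style["positive"].format(prompt="portrait of a young adult woman")
negative = style["negative"].format(prompt="nsfw+")
```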
#### Anime

The `Anime: *` styles work the best with Dreamshaper. When using the anime-specific Anything model, you should use the `Anime: Anything` style with the following settings:

Your subject should be a few simple tokens like `girl, brunette, blue eyes, armor, nebula, celestial`. Experiment with `Clip Skip` and `Karras`. Finish with the `Perfection Style` LoRA on a moderate setting and upscale.
### Scale

Rescale up to 4x using [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) with weights from [ai-forever](https://huggingface.co/ai-forever/Real-ESRGAN). Necessary for high-resolution images.
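A hedged sketch of 4x upscaling with the ai-forever package (API names assumed from its README; weight filename is an assumption):

```python
import torch
from PIL import Image
from RealESRGAN import RealESRGAN  # from the ai-forever/Real-ESRGAN package

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = RealESRGAN(device, scale=4)
model.load_weights("weights/RealESRGAN_x4.pth", download=True)
upscaled = model.predict(Image.open("image.png").convert("RGB"))
```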
### Image-to-Image

The `🖼️ Image` tab enables the image-to-image and IP-Adapter pipelines.
#### Strength

Initial image strength (known as _denoising strength_) is essentially how much the generation will differ from the input image. A value of `0` will be identical to the original, while `1` will be a completely new image. You may want to also increase the number of inference steps.

> 💡 Denoising strength only applies to the `Initial Image` input; it doesn't affect ControlNet or IP-Adapter.
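In diffusers this is the `strength` argument of the image-to-image pipeline; roughly `strength * num_inference_steps` denoising steps actually run, which is why extra steps help at lower strengths. A sketch (checkpoint is one of the options mentioned above):

```python
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("SG161222/Realistic_Vision_V5.1_noVAE")
init_image = Image.open("initial.png").convert("RGB")
image = pipe(
    prompt="portrait of a young adult woman",
    image=init_image,
    strength=0.6,            # 0 = identical to the input, 1 = completely new image
    num_inference_steps=40,  # bumped up because only a fraction of steps run
).images[0]
```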
#### ControlNet

In [ControlNet](https://github.com/lllyasviel/ControlNet), the input image is used to get a feature map from an _annotator_. These are computer vision models used for tasks like edge detection and pose estimation. ControlNet models are trained to understand these feature maps. Read the [Diffusers docs](https://huggingface.co/docs/diffusers/using-diffusers/controlnet) to learn more.

Currently, the only annotator available is [Canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny) (edge detection).
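A hedged sketch of the same flow with diffusers, using OpenCV's Canny detector as the annotator:

```python
import cv2
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Annotator step: turn the input image into an edge map (the feature map)
image = np.array(Image.open("input.png").convert("RGB"))
edges = cv2.Canny(image, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# This ControlNet was trained to follow Canny feature maps
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", controlnet=controlnet
)
image = pipe("portrait of a young adult woman", image=control_image).images[0]
```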
#### IP-Adapter

In an image-to-image pipeline, the input image is used as the initial latent. With [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter), the input image is processed by a separate image encoder and the encoded features are used as conditioning along with the text prompt.

For capturing faces, enable `IP-Adapter Face` to use the full-face model. You should use an input image that is mostly a face and it should be high quality. You can generate fake portraits with Realistic Vision to experiment.
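With diffusers, the full-face variant can be loaded like this (a sketch; `pipe` is a loaded Stable Diffusion pipeline and the scale value is only a starting point):

```python
from PIL import Image

# Full-face IP-Adapter weights from the h94/IP-Adapter repository
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)

face_image = Image.open("face.png").convert("RGB")
image = pipe(prompt="portrait of a young adult woman", ip_adapter_image=face_image).images[0]
```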
### Advanced

#### DeepCache

[DeepCache](https://github.com/horseee/DeepCache) caches lower UNet layers and reuses them every `Interval` steps. Trade quality for speed:
* `1`: no caching (default)
* `3`: balanced
* `4`: more speed
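The `Interval` setting corresponds to DeepCache's `cache_interval`; a hedged sketch with the upstream helper, assuming `pipe` is a loaded Stable Diffusion pipeline:

```python
from DeepCache import DeepCacheSDHelper

helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)  # Interval=3, the "balanced" setting
helper.enable()
images = pipe("portrait of a young adult woman").images
helper.disable()
```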
#### FreeU

[FreeU](https://github.com/ChenyangSi/FreeU) re-weights the contributions sourced from the UNet’s skip connections and backbone feature maps. Can sometimes improve image quality.
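diffusers ships this as a pipeline toggle; the scaling factors below are values commonly suggested for SD 1.5, so treat them as a starting point rather than this Space's exact settings:

```python
# b1/b2 boost backbone features, s1/s2 damp skip-connection features
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
image = pipe("portrait of a young adult woman").images[0]
pipe.disable_freeu()  # turn it back off when comparing results
```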
#### Clip Skip

When enabled, the last CLIP layer is skipped. Can sometimes improve image quality.
#### Tiny VAE

Enable [madebyollin/taesd](https://github.com/madebyollin/taesd) for near-instant latent decoding with a minor loss in detail. Useful for development.
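With diffusers this is a drop-in replacement for the pipeline's VAE (sketch; `pipe` is a loaded Stable Diffusion pipeline):

```python
from diffusers import AutoencoderTiny

# Swap the full VAE for the tiny autoencoder; decoding becomes near-instant
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")
```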
app.css
CHANGED
@@ -67,24 +67,6 @@
 #intro > div > svg:is(.dark *) {
   fill: #10b981 !important;
 }
-#intro nav {
-  display: flex;
-  column-gap: 0.5rem;
-}
-#intro nav a, #intro nav span {
-  white-space: nowrap;
-  font-family: monospace;
-}
-#intro nav span {
-  font-weight: 500;
-  color: var(--body-text-color);
-}
-#intro nav a {
-  color: var(--body-text-color-subdued);
-}
-#intro nav a:hover {
-  color: var(--body-text-color);
-}
 
 .popover {
   position: relative;
@@ -117,6 +99,11 @@
   content: var(--seed, "-1");
 }
 
+#settings h3 {
+  color: var(--block-title-text-color) !important;
+  margin-top: 8px !important;
+}
+
 .tabs, .tabitem, .tab-nav, .tab-nav > .selected {
   border-width: 0px;
 }
app.py
CHANGED
    read_file,
)

# Update refresh button hover text
seed_js = """
(seed) => {
    const button = document.getElementById("refresh");
    button.style.setProperty("--seed", `"${seed}"`);
    return seed;
}
"""

# The CSS `content` attribute expects a string so we need to wrap the number in quotes
refresh_seed_js = """
() => {
    const n = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER);
    // ...
}
"""

# Update width and height on aspect ratio change
aspect_ratio_js = """
(ar, w, h) => {
    if (!ar) return [w, h];
    // ...
"""

# Show "Custom" aspect ratio when manually changing width or height, or one of the predefined ones
custom_aspect_ratio_js = """
(w, h) => {
    if (w === 384 && h === 672) return "384,672";
    if (w === 448 && h === 576) return "448,576";
    if (w === 512 && h === 512) return "512,512";
    if (w === 576 && h === 448) return "576,448";
    if (w === 672 && h === 384) return "672,384";
    return null;
}
"""


# Random prompt function
async def random_fn():
    prompts = read_file("data/prompts.json")
    prompts = json.loads(prompts)
    return gr.Textbox(value=random.choice(prompts))


# Transform the raw inputs before generation
async def generate_fn(*args, progress=gr.Progress(track_tqdm=True)):
    if len(args) > 0:
        prompt = args[0]
    # ...
    if prompt is None or prompt.strip() == "":
        raise gr.Error("You must enter a prompt")

    # These are always the last arguments
    DISABLE_IMAGE_PROMPT, DISABLE_CONTROL_IMAGE_PROMPT, DISABLE_IP_IMAGE_PROMPT = args[-3:]
    gen_args = list(args[:-3])

    # First two arguments are the prompt and negative prompt
    if DISABLE_IMAGE_PROMPT:
        gen_args[2] = None
    if DISABLE_CONTROL_IMAGE_PROMPT:
    # ...

    if Config.ZERO_GPU:
        progress((0, 100), desc="ZeroGPU init")

    # Remaining arguments are the alert handlers and progress bar
    images = await async_call(
        generate,
        *gen_args,

# ...

        block_background_fill_dark=gr.themes.colors.gray.c900,
    ),
) as demo:
    # Disable image inputs without clearing them
    DISABLE_IMAGE_PROMPT = gr.State(False)
    DISABLE_IP_IMAGE_PROMPT = gr.State(False)
    DISABLE_CONTROL_IMAGE_PROMPT = gr.State(False)
    # ...
    gr.HTML(read_file("./partials/intro.html"))

    with gr.Tabs():
        with gr.TabItem("🏠 Home"):
            with gr.Column():
                output_images = gr.Gallery(
                    elem_classes=["gallery"],
                    # ...
                    max_lines=3,
                    lines=3,
                )
                with gr.Row():
                    generate_btn = gr.Button("Generate", variant="primary")
                    random_btn = gr.Button(
                        # ...
                        value="🗑️",
                    )

        with gr.TabItem("⚙️ Settings", elem_id="settings"):
            # Prompt settings
            gr.HTML("<h3>Prompt</h3>")
            with gr.Row():
                negative_prompt = gr.Textbox(
                    label="Negative Prompt",
                    value="nsfw+",
                    lines=1,
                )
                styles = json.loads(read_file("data/styles.json"))
                style_ids = list(styles.keys())
                style_ids = [sid for sid in style_ids if not sid.startswith("_")]
                style = gr.Dropdown(
                    value=Config.STYLE,
                    label="Style Template",
                    choices=[("None", "none")] + [(styles[sid]["name"], sid) for sid in style_ids],
                )

            # Model settings
            gr.HTML("<h3>Model</h3>")
            with gr.Row():
                model = gr.Dropdown(
                    choices=Config.MODELS,
                    value=Config.MODEL,
                    filterable=False,
                    label="Checkpoint",
                    min_width=240,
                )
                scheduler = gr.Dropdown(
                    choices=Config.SCHEDULERS.keys(),
                    value=Config.SCHEDULER,
                    elem_id="scheduler",
                    label="Scheduler",
                    filterable=False,
                )
            with gr.Row():
                embeddings = gr.Dropdown(
                    elem_id="embeddings",
                    label="Embeddings",
                    choices=[(f"<{e}>", e) for e in Config.EMBEDDINGS],
                    multiselect=True,
                    value=[Config.EMBEDDING],
                    min_width=240,
                )
            with gr.Row():
                with gr.Group(elem_classes=["gap-0"]):
                    lora_1 = gr.Dropdown(
                        min_width=240,
                        label="LoRA #1",
                        value="none",
                        choices=[("None", "none")]
                        + [
                            (lora["name"], lora_id) for lora_id, lora in Config.CIVIT_LORAS.items()
                        ],
                    )
                    lora_1_weight = gr.Slider(
                        value=0.0,
                        minimum=0.0,
                        maximum=1.0,
                        step=0.1,
                        show_label=False,
                    )
                with gr.Group(elem_classes=["gap-0"]):
                    lora_2 = gr.Dropdown(
                        min_width=240,
                        label="LoRA #2",
                        value="none",
                        choices=[("None", "none")]
                        + [
                            (lora["name"], lora_id) for lora_id, lora in Config.CIVIT_LORAS.items()
                        ],
                    )
                    lora_2_weight = gr.Slider(
                        value=0.0,
                        minimum=0.0,
                        maximum=1.0,
                        step=0.1,
                        show_label=False,
                    )

            # Generation settings
            gr.HTML("<h3>Generation</h3>")
            with gr.Row():
                guidance_scale = gr.Slider(
                    value=Config.GUIDANCE_SCALE,
                    label="Guidance Scale",
                    minimum=1.0,
                    maximum=15.0,
                    step=0.1,
                )
                inference_steps = gr.Slider(
                    value=Config.INFERENCE_STEPS,
                    label="Inference Steps",
                    minimum=1,
                    maximum=50,
                    step=1,
                )
                deepcache_interval = gr.Slider(
                    value=Config.DEEPCACHE_INTERVAL,
                    label="DeepCache",
                    minimum=1,
                    maximum=4,
                    step=1,
                )
            with gr.Row():
                width = gr.Slider(
                    value=Config.WIDTH,
                    label="Width",
                    minimum=256,
                    maximum=768,
                    step=32,
                )
                height = gr.Slider(
                    value=Config.HEIGHT,
                    label="Height",
                    minimum=256,
                    maximum=768,
                    step=32,
                )
                aspect_ratio = gr.Dropdown(
                    value=f"{Config.WIDTH},{Config.HEIGHT}",
                    label="Aspect Ratio",
                    filterable=False,
                    choices=[
                        ("Custom", None),
                        ("4:7 (384x672)", "384,672"),
                        ("7:9 (448x576)", "448,576"),
                        ("1:1 (512x512)", "512,512"),
                        ("9:7 (576x448)", "576,448"),
                        ("7:4 (672x384)", "672,384"),
                    ],
                )
            with gr.Row():
                file_format = gr.Dropdown(
                    choices=["png", "jpeg", "webp"],
                    label="File Format",
                    filterable=False,
                    value="png",
                )
                num_images = gr.Dropdown(
                    choices=list(range(1, 5)),
                    value=Config.NUM_IMAGES,
                    filterable=False,
                    label="Images",
                )
                scale = gr.Dropdown(
                    choices=[(f"{s}x", s) for s in Config.SCALES],
                    filterable=False,
                    value=Config.SCALE,
                    label="Scale",
                )
                seed = gr.Number(
                    value=Config.SEED,
                    label="Seed",
                    minimum=-1,
                    maximum=(2**64) - 1,
                )
            with gr.Row():
                use_karras = gr.Checkbox(
                    elem_classes=["checkbox"],
                    label="Karras σ",
                    value=True,
                )
                use_taesd = gr.Checkbox(
                    elem_classes=["checkbox"],
                    label="Tiny VAE",
                    value=False,
                )
                use_freeu = gr.Checkbox(
                    elem_classes=["checkbox"],
                    label="FreeU",
                    value=False,
                )
                use_clip_skip = gr.Checkbox(
                    elem_classes=["checkbox"],
                    label="Clip skip",
                    value=False,
                )

            # Image-to-Image settings
            gr.HTML("<h3>Image-to-Image</h3>")
            with gr.Row():
                image_prompt = gr.Image(
                    show_share_button=False,
                    # ...
                    format="png",
                    type="pil",
                )
            with gr.Row():
                denoising_strength = gr.Slider(
                    label="Initial Image Strength",
                    # ...
                    value=Config.ANNOTATOR,
                    filterable=False,
                )
            with gr.Row():
                disable_image = gr.Checkbox(
                    label="Disable Initial Image",
                    # ...
                    value=False,
                )

        with gr.TabItem("ℹ️ Info"):
            gr.Markdown(read_file("DOCS.md"))

    # Random prompt on click
    random_btn.click(random_fn, inputs=[], outputs=[prompt], show_api=False)

    # Update seed on click
    refresh_btn.click(None, inputs=[], outputs=[seed], js=refresh_seed_js)

    # Update seed button hover text
    seed.change(None, inputs=[seed], outputs=[], js=seed_js)

    # Update image prompts file format
    file_format.change(
        lambda f: (
            gr.Gallery(format=f),
            # ...
        show_api=False,
    )

    # Update width and height on aspect ratio change
    aspect_ratio.input(
        None,
        inputs=[aspect_ratio, width, height],
        outputs=[width, height],
        js=aspect_ratio_js,
    )

    # Show "Custom" aspect ratio when manually changing width or height
    gr.on(
        triggers=[width.input, height.input],
        fn=None,
        inputs=[width, height],
        outputs=[aspect_ratio],
        js=custom_aspect_ratio_js,
    )

    # Toggle image prompts by updating session state
    gr.on(
        triggers=[disable_image.input, disable_control_image.input, disable_ip_image.input],
        fn=lambda image, control_image, ip_image: (image, control_image, ip_image),
        inputs=[disable_image, disable_control_image, disable_ip_image],
        outputs=[DISABLE_IMAGE_PROMPT, DISABLE_CONTROL_IMAGE_PROMPT, DISABLE_IP_IMAGE_PROMPT],
    )

    # Generate images
    gr.on(
        triggers=[generate_btn.click, prompt.submit],
        fn=generate_fn,
partials/intro.html
CHANGED
@@ -7,18 +7,7 @@
     <path d="M7.48877 6.75C7.29015 6.75 7.09967 6.82902 6.95923 6.96967C6.81879 7.11032 6.73989 7.30109 6.73989 7.5C6.73989 7.69891 6.81879 7.88968 6.95923 8.03033C7.09967 8.17098 7.29015 8.25 7.48877 8.25C7.68738 8.25 7.87786 8.17098 8.0183 8.03033C8.15874 7.88968 8.23764 7.69891 8.23764 7.5C8.23764 7.30109 8.15874 7.11032 8.0183 6.96967C7.87786 6.82902 7.68738 6.75 7.48877 6.75ZM7.8632 0C11.2331 0 11.3155 2.6775 9.54818 3.5625C8.80679 3.93 8.47728 4.7175 8.335 5.415C8.69446 5.565 9.00899 5.7975 9.24863 6.0975C12.0195 4.5975 15 5.19 15 7.875C15 11.25 12.3265 11.325 11.4428 9.5475C11.0684 8.805 10.2746 8.475 9.57813 8.3325C9.42836 8.6925 9.19621 9 8.89665 9.255C10.3869 12.0225 9.79531 15 7.11433 15C3.74438 15 3.67698 12.315 5.44433 11.43C6.17823 11.0625 6.50774 10.2825 6.65751 9.5925C6.29056 9.4425 5.96855 9.2025 5.72891 8.9025C2.96555 10.3875 0 9.8025 0 7.125C0 3.75 2.666 3.6675 3.54967 5.445C3.92411 6.1875 4.71043 6.51 5.40689 6.6525C5.54918 6.2925 5.78882 5.9775 6.09586 5.7375C4.60559 2.97 5.1972 0 7.8632 0Z"></path>
   </svg>
 </div>
-<
-
-
-  <a href="https://huggingface.co/spaces/adamelliotfields/diffusion-xl" target="_blank" rel="noopener noreferrer">XL</a>
-  <a href="https://huggingface.co/spaces/adamelliotfields/diffusion-flux" target="_blank" rel="noopener noreferrer">FLUX.1</a>
-  <a href="https://huggingface.co/spaces/adamelliotfields/diffusion/blob/main/DOCS.md" target="_blank" rel="noopener noreferrer">Docs</a>
-  <a href="https://adamelliotfields-diffusion.hf.space" target="_blank" rel="noopener noreferrer">
-    <svg style="display: inline-block" width="16px" height="16px" viewBox="0 0 12 12" fill="currentColor" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" preserveAspectRatio="xMidYMid meet">
-      <path fill-rule="evenodd" clip-rule="evenodd" d="M7.5 1.75H9.75C9.88807 1.75 10 1.86193 10 2V4.25C10 4.38807 9.88807 4.5 9.75 4.5C9.61193 4.5 9.5 4.38807 9.5 4.25V2.60355L6.42678 5.67678C6.32915 5.77441 6.17085 5.77441 6.07322 5.67678C5.97559 5.57915 5.97559 5.42085 6.07322 5.32322L9.14645 2.25H7.5C7.36193 2.25 7.25 2.13807 7.25 2C7.25 1.86193 7.36193 1.75 7.5 1.75Z" fill="currentColor"></path>
-      <path fill-rule="evenodd" clip-rule="evenodd" d="M6 2.5C6 2.22386 5.77614 2 5.5 2H2.69388C2.50985 2 2.33336 2.07311 2.20323 2.20323C2.0731 2.33336 2 2.50986 2 2.69389V8.93885C2 9.12288 2.0731 9.29933 2.20323 9.42953C2.33336 9.55963 2.50985 9.63273 2.69388 9.63273H8.93884C9.12287 9.63273 9.29941 9.55963 9.42951 9.42953C9.55961 9.29933 9.63271 9.12288 9.63271 8.93885V6.5C9.63271 6.22386 9.40885 6 9.13271 6C8.85657 6 8.63271 6.22386 8.63271 6.5V8.63273H3V3H5.5C5.77614 3 6 2.77614 6 2.5Z" fill="currentColor" fill-opacity="0.3"></path>
-    </svg>
-  </a>
-</nav>
-</div>
+<p>
+  Stable Diffusion on ZeroGPU.
+</p>
 </div>