adamelliotfields committed
Commit: 79ce657
Parent: 51fab87

Simplify textual inversion embeddings
DOCS.md CHANGED
@@ -41,16 +41,6 @@ Apply up to 2 LoRA (low-rank adaptation) adapters with adjustable strength:
 
 > NB: The trigger words are automatically appended to the positive prompt for you.
 
-### Embeddings
-
-Select one or more [textual inversion](https://huggingface.co/docs/diffusers/en/using-diffusers/textual_inversion_inference) embeddings:
-
-* [`fast_negative`](https://civitai.com/models/71961?modelVersionId=94057): all-purpose (default, **recommended**)
-* [`cyberrealistic_negative`](https://civitai.com/models/77976?modelVersionId=82745): realistic add-on (for CyberRealistic)
-* [`unrealistic_dream`](https://civitai.com/models/72437?modelVersionId=77173): realistic add-on (for RealisticVision)
-
-> NB: The trigger token is automatically appended to the negative prompt for you.
-
 ### Styles
 
 [Styles](https://huggingface.co/spaces/adamelliotfields/diffusion/blob/main/data/styles.json) are prompt templates that wrap your positive and negative prompts. They were originally derived from the [twri/sdxl_prompt_styler](https://github.com/twri/sdxl_prompt_styler) Comfy node, but have since been entirely rewritten.
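The rewritten `styles.json` schema isn't shown in this diff, so the sketch below is a hypothetical entry modeled on the upstream Comfy node's `{prompt}` placeholder convention. It also illustrates why `lib/inference.py` (further down) strips a leading `"(), "` from the styled negative prompt:

```python
# Hypothetical style entry: the field names and "{prompt}" placeholder are
# assumptions modeled on twri/sdxl_prompt_styler, not the actual schema.
style = {
    "name": "enhance",
    "positive": "breathtaking {prompt}, award-winning, highly detailed",
    "negative": "({prompt}), ugly, deformed, noisy, blurry",
}

positive_styled = style["positive"].format(prompt="a cat")
negative_styled = style["negative"].format(prompt="")

# An empty user negative leaves a dangling "(), " prefix, which is exactly
# what the startswith("(), ") cleanup in lib/inference.py removes.
if negative_styled.startswith("(), "):
    negative_styled = negative_styled[4:]

print(positive_styled)  # breathtaking a cat, award-winning, highly detailed
print(negative_styled)  # ugly, deformed, noisy, blurry
```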
@@ -83,7 +73,7 @@ Initial image strength (known as _denoising strength_) is essentially how much t
 
 #### ControlNet
 
-In [ControlNet](https://github.com/lllyasviel/ControlNet), the input image is used to get a feature map from an _annotator_. These are computer vision models used for tasks like edge detection and pose estimation. ControlNet models are trained to understand these feature maps. Read the [Diffusers docs](https://huggingface.co/docs/diffusers/using-diffusers/controlnet) to learn more.
+In [ControlNet](https://github.com/lllyasviel/ControlNet), the input image is used to get a feature map from an _annotator_. These are computer vision models used for tasks like edge detection and pose estimation. ControlNet models are trained to understand these feature maps. Read the [docs](https://huggingface.co/docs/diffusers/using-diffusers/controlnet) to learn more.
 
 Currently, the only annotator available is [Canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny) (edge detection).
 
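For reference, a minimal sketch of the annotate-then-condition flow described above, using the standard Diffusers ControlNet API rather than this app's exact code (`input.png` is a placeholder; the model IDs are the ones referenced in this repo):

```python
# Annotate with Canny, then condition generation on the resulting edge map.
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Annotator step: extract a Canny edge map from the input image
image = np.array(Image.open("input.png").convert("RGB"))
edges = cv2.Canny(image, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# The ControlNet was trained to follow this kind of feature map
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "Lykon/dreamshaper-8", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe("a photo of a cat", image=control_image).images[0]
```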
@@ -95,6 +85,10 @@ For capturing faces, enable `IP-Adapter Face` to use the full-face model. You sh
 
 ### Advanced
 
+#### Textual Inversion
+
+Enable `Use negative TI` to append [`fast_negative`](https://civitai.com/models/71961?modelVersionId=94057) to your negative prompt. Read [An Image is Worth One Word](https://huggingface.co/papers/2208.01618) to learn more.
+
 #### DeepCache
 
 [DeepCache](https://github.com/horseee/DeepCache) caches lower UNet layers and reuses them every `Interval` steps. Trade quality for speed:
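The `#### Textual Inversion` section added above documents what this commit implements in `lib/inference.py`. A standalone sketch of the same flow (model ID and embedding path taken from this repo):

```python
# Load the embedding, reference its token in the negative prompt,
# and unload when done.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")

pipe.load_textual_inversion("embeddings/fast_negative.pt", token="<fast_negative>")
try:
    image = pipe("a photo of a cat", negative_prompt="<fast_negative>").images[0]
finally:
    pipe.unload_textual_inversion()
```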
 
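For the DeepCache paragraph in the trailing context above, this is the helper API as documented in the DeepCache README (parameter values here are illustrative, not this app's defaults):

```python
# Wrap the pipeline and reuse cached lower-UNet features every
# `cache_interval` steps.
import torch
from DeepCache import DeepCacheSDHelper
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")

helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()
image = pipe("a photo of a cat").images[0]  # fewer full UNet passes
helper.disable()
```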
app.py CHANGED
@@ -215,15 +215,6 @@ with gr.Blocks(
                 label="Scheduler",
                 filterable=False,
             )
-        with gr.Row():
-            embeddings = gr.Dropdown(
-                elem_id="embeddings",
-                label="Embeddings",
-                choices=[(f"<{e}>", e) for e in Config.EMBEDDINGS],
-                multiselect=True,
-                value=[Config.EMBEDDING],
-                min_width=240,
-            )
         with gr.Row():
             with gr.Group(elem_classes=["gap-0"]):
                 lora_1 = gr.Dropdown(
@@ -315,7 +306,7 @@
         with gr.Row():
             file_format = gr.Dropdown(
                 choices=["png", "jpeg", "webp"],
-                label="File Format",
+                label="Format",
                 filterable=False,
                 value="png",
             )
@@ -343,6 +334,11 @@
                 label="Karras σ",
                 value=True,
             )
+            use_negative_embedding = gr.Checkbox(
+                elem_classes=["checkbox"],
+                label="Use negative TI",
+                value=False,
+            )
             use_taesd = gr.Checkbox(
                 elem_classes=["checkbox"],
                 label="Tiny VAE",
@@ -487,7 +483,6 @@
             lora_1_weight,
             lora_2,
             lora_2_weight,
-            embeddings,
             style,
             seed,
             model,
@@ -506,6 +501,7 @@
             use_freeu,
             use_clip_skip,
             use_ip_face,
+            use_negative_embedding,
             DISABLE_IMAGE_PROMPT,
             DISABLE_CONTROL_IMAGE_PROMPT,
             DISABLE_IP_IMAGE_PROMPT,
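Note on the last two hunks: Gradio passes `inputs` positionally, so the new checkbox has to sit at the same index in the inputs list as the new `negative_embedding` keyword of `generate()` — here, right after `use_ip_face`. A simplified sketch of that wiring (not this app's exact layout):

```python
import gradio as gr

def generate(prompt, use_negative_embedding):
    # inputs arrive in list order, matching the function's parameter order
    suffix = "<fast_negative>" if use_negative_embedding else "(none)"
    return f"prompt: {prompt} | negative TI: {suffix}"

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Prompt")
    use_negative_embedding = gr.Checkbox(label="Use negative TI", value=False)
    output = gr.Textbox(label="Result")
    button = gr.Button("Generate")
    button.click(generate, inputs=[prompt, use_negative_embedding], outputs=output)

demo.launch()
```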
embeddings/cyberrealistic_negative.pt DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:65f3ea567c04c22f92024c5b55cbeca580bc330c4290aeb647ebd86273b3ffb8
-size 197662

embeddings/unrealistic_dream.pt DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a77451e7ea075c7f72d488d2b740b3d3970c671c0ac39dd3155f3c3b129df959
-size 114539
lib/config.py CHANGED
@@ -140,12 +140,7 @@ Config = SimpleNamespace(
     ANNOTATORS={
         "canny": "lllyasviel/control_v11p_sd15_canny",
     },
-    EMBEDDING="fast_negative",
-    EMBEDDINGS=[
-        "cyberrealistic_negative",
-        "fast_negative",
-        "unrealistic_dream",
-    ],
+    NEGATIVE_EMBEDDING="fast_negative",
     STYLE="enhance",
     WIDTH=512,
     HEIGHT=512,
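Since the config is a plain `SimpleNamespace`, collapsing the default plus list into a single value is a one-line change with attribute access everywhere. A minimal sketch of the pattern:

```python
from types import SimpleNamespace

Config = SimpleNamespace(
    NEGATIVE_EMBEDDING="fast_negative",
    WIDTH=512,
    HEIGHT=512,
)

print(Config.NEGATIVE_EMBEDDING)  # fast_negative
```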
lib/inference.py CHANGED
@@ -70,7 +70,6 @@ def generate(
     lora_1_weight=0.0,
     lora_2=None,
     lora_2_weight=0.0,
-    embeddings=[],
     style=None,
     seed=None,
     model="Lykon/dreamshaper-8",
@@ -89,6 +88,7 @@
     freeu=False,
     clip_skip=False,
     ip_face=False,
+    negative_embedding=False,
     Error=Exception,
     Info=None,
     progress=None,
@@ -193,11 +193,13 @@
         pipe.unload_lora_weights()
         raise Error("Error setting LoRA weights")
 
-    # load embeddings
-    embeddings_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "embeddings"))
-    for embedding in embeddings:
+    # Load negative embedding if requested
+    if negative_embedding:
+        embeddings_dir = os.path.abspath(
+            os.path.join(os.path.dirname(__file__), "..", "embeddings")
+        )
+        embedding = Config.NEGATIVE_EMBEDDING
         try:
-            # wrap embeddings in angle brackets
             pipe.load_textual_inversion(
                 pretrained_model_name_or_path=f"{embeddings_dir}/{embedding}.pt",
                 token=f"<{embedding}>",
@@ -219,6 +221,7 @@
     images = []
     current_seed = seed
     safe_progress(progress, 0, num_images, f"Generating image 0/{num_images}")
+
     for i in range(num_images):
         try:
             generator = torch.Generator(device=pipe.device).manual_seed(current_seed)
@@ -228,12 +231,12 @@
             if negative_styled.startswith("(), "):
                 negative_styled = negative_styled[4:]
 
+            if negative_embedding:
+                negative_styled += f", <{Config.NEGATIVE_EMBEDDING}>"
+
             for lora in loras:
                 positive_styled += f", {Config.CIVIT_LORAS[lora]['trigger']}"
 
-            for embedding in embeddings:
-                negative_styled += f", <{embedding}>"
-
             positive_embeds, negative_embeds = compel.pad_conditioning_tensors_to_same_length(
                 [compel(positive_styled), compel(negative_styled)]
             )
@@ -273,7 +276,7 @@
             images.append((image, str(current_seed)))
             current_seed += 1
     finally:
-        if embeddings:
+        if negative_embedding:
             pipe.unload_textual_inversion()
         if loras:
             pipe.unload_lora_weights()
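Context for the `pad_conditioning_tensors_to_same_length` call visible in the hunks above: Compel encodes each prompt separately, and the two embeddings must be padded to equal shape because classifier-free guidance concatenates them. A standalone sketch using the Compel API as `generate()` does (model ID taken from the defaults above):

```python
import torch
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# Encode both prompts, then pad the shorter tensor to match the longer one
positive_embeds, negative_embeds = compel.pad_conditioning_tensors_to_same_length(
    [compel("a photo of a cat, highly detailed"), compel("blurry, low quality")]
)
image = pipe(
    prompt_embeds=positive_embeds,
    negative_prompt_embeds=negative_embeds,
).images[0]
```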