Commit 8d5ebb6 by PeterL1n (parent: 0c78f72): Add readme
README.md ADDED

---
license: openrail++
tags:
- text-to-image
- stable-diffusion
---

# SDXL-Lightning

![Intro Image](images/intro.jpg)

SDXL-Lightning is a lightning-fast text-to-image generative model. It can generate high-quality 1024px images in a few steps. For more information, please refer to our paper: [SDXL-Lightning: Progressive Adversarial Diffusion Distillation](). The models are released for research purposes only.

Our models are distilled from [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). This repository contains checkpoints for 1-step, 2-step, 4-step, and 8-step distilled models.

We provide both full UNet and LoRA checkpoints. The full UNet models offer the best quality, while the LoRA models can be applied to other base models.

## Diffusers Usage

Please always use the checkpoint that matches your inference step setting.
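
For convenience, the checkpoint filename can be derived from the step count. Below is a minimal sketch of such a helper, not part of this repository; it assumes every file follows the `sdxl_lightning_{n}step_unet.pth` / `sdxl_lightning_{n}step_lora.pth` naming pattern of the examples in this README (only the 1-step and 4-step names appear verbatim here, so treat the 2-step and 8-step names as an assumption).

```python
# Hypothetical helper: derive the checkpoint filename from the desired step
# count. Filenames other than the 1-step and 4-step ones are assumed to
# follow the same naming pattern as the examples below.
def lightning_ckpt(steps: int, lora: bool = False) -> str:
    assert steps in (1, 2, 4, 8), "SDXL-Lightning provides 1/2/4/8-step models"
    assert not (lora and steps == 1), "only a UNet checkpoint is provided for 1 step"
    kind = "lora" if lora else "unet"
    return f"sdxl_lightning_{steps}step_{kind}.pth"
```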

### 2-Step, 4-Step, 8-Step UNet

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download

base = "stabilityai/stable-diffusion-xl-base-1.0"
repo = "bytedance/sdxl-lightning"
ckpt = "sdxl_lightning_4step_unet.pth"  # Use the correct ckpt for your step setting!

# Load model.
pipe = StableDiffusionXLPipeline.from_pretrained(base, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.unet.load_state_dict(torch.load(hf_hub_download(repo, ckpt), map_location="cuda"))

# Ensure the sampler uses "trailing" timesteps.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")

# Use the same number of inference steps as the loaded model, with CFG disabled (guidance_scale=0).
pipe("A girl smiling", num_inference_steps=4, guidance_scale=0).images[0].save("output.png")
```
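
Continuing from the snippet above, the same pipeline can be switched to a different step count by reloading the matching UNet weights; the 2-step filename below is an assumption based on the naming pattern.

```python
# Sketch: swap in the 2-step weights and sample with the matching step count.
ckpt = "sdxl_lightning_2step_unet.pth"  # assumed filename
pipe.unet.load_state_dict(torch.load(hf_hub_download(repo, ckpt), map_location="cuda"))
pipe("A girl smiling", num_inference_steps=2, guidance_scale=0).images[0].save("output_2step.png")
```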

### 2-Step, 4-Step, 8-Step LoRA

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download

base = "stabilityai/stable-diffusion-xl-base-1.0"
repo = "bytedance/sdxl-lightning"
ckpt = "sdxl_lightning_4step_lora.pth"  # Use the correct ckpt for your step setting!

# Load model.
pipe = StableDiffusionXLPipeline.from_pretrained(base, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo, ckpt))
pipe.fuse_lora()

# Ensure the sampler uses "trailing" timesteps.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")

# Use the same number of inference steps as the loaded model, with CFG disabled (guidance_scale=0).
pipe("A girl smiling", num_inference_steps=4, guidance_scale=0).images[0].save("output.png")
```
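
Note that `fuse_lora()` bakes the LoRA weights into the UNet. To switch to a different step count afterwards, unfuse and unload the current LoRA first. A sketch continuing from the snippet above, with the 8-step filename assumed from the naming pattern:

```python
# Sketch: replace the fused 4-step LoRA with the 8-step one.
pipe.unfuse_lora()          # restore the original base weights
pipe.unload_lora_weights()  # drop the old LoRA layers
pipe.load_lora_weights(hf_hub_download(repo, "sdxl_lightning_8step_lora.pth"))  # assumed filename
pipe.fuse_lora()
pipe("A girl smiling", num_inference_steps=8, guidance_scale=0).images[0].save("output_8step.png")
```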

### 1-Step UNet

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download

base = "stabilityai/stable-diffusion-xl-base-1.0"
repo = "bytedance/sdxl-lightning"
ckpt = "sdxl_lightning_1step_unet.pth"  # Use the correct ckpt for your step setting!

# Load model.
pipe = StableDiffusionXLPipeline.from_pretrained(base, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.unet.load_state_dict(torch.load(hf_hub_download(repo, ckpt), map_location="cuda"))

# Ensure the sampler uses "trailing" timesteps and the "sample" prediction type
# (the 1-step model predicts x0 directly rather than noise).
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing", prediction_type="sample")

# Use the same number of inference steps as the loaded model, with CFG disabled (guidance_scale=0).
pipe("A girl smiling", num_inference_steps=1, guidance_scale=0).images[0].save("output.png")
```
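
For reproducible outputs, you can pass a seeded `torch.Generator` to the pipeline call; `generator` is a standard diffusers pipeline argument. A usage example continuing from the snippet above:

```python
# Fix the random seed so repeated 1-step runs produce the same image.
generator = torch.Generator("cuda").manual_seed(42)
pipe("A girl smiling", num_inference_steps=1, guidance_scale=0, generator=generator).images[0].save("output_seeded.png")
```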

## ComfyUI Usage

Please always use the checkpoint that matches your inference step setting, and use the Euler sampler with the sgm_uniform scheduler.

### 2-Step, 4-Step, 8-Step UNet

1. Download the UNet checkpoint to `/ComfyUI/models/unet`.
2. Download our [ComfyUI UNet workflow](comfyui/sdxl_lightning_unet.json).

![SDXL-Lightning ComfyUI UNet Workflow](images/comfyui_unet.png)

### 2-Step, 4-Step, 8-Step LoRA

1. Download the LoRA checkpoint to `/ComfyUI/models/loras`.
2. Download our [ComfyUI LoRA workflow](comfyui/sdxl_lightning_lora.json).

![SDXL-Lightning ComfyUI LoRA Workflow](images/comfyui_lora.png)

### 1-Step UNet

ComfyUI does not yet support switching the model formulation to x0-prediction, so the 1-step model is not usable in ComfyUI for now. Hopefully ComfyUI gets updated soon.

comfyui/sdxl_lightning_lora.json ADDED

{
  "last_node_id": 13,
  "last_link_id": 12,
  "nodes": [
    {
      "id": 8, "type": "VAEDecode", "pos": [1209, 188], "size": {"0": 210, "1": 46},
      "flags": {}, "order": 8, "mode": 0,
      "inputs": [
        {"name": "samples", "type": "LATENT", "link": 7},
        {"name": "vae", "type": "VAE", "link": 8}
      ],
      "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [9], "slot_index": 0}],
      "properties": {"Node name for S&R": "VAEDecode"}
    },
    {
      "id": 7, "type": "CLIPTextEncode", "pos": [413, 389], "size": {"0": 425.27801513671875, "1": 180.6060791015625},
      "flags": {}, "order": 6, "mode": 0,
      "inputs": [{"name": "clip", "type": "CLIP", "link": 5}],
      "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [6], "slot_index": 0}],
      "properties": {"Node name for S&R": "CLIPTextEncode"},
      "widgets_values": [""]
    },
    {
      "id": 6, "type": "CLIPTextEncode", "pos": [415, 186], "size": {"0": 422.84503173828125, "1": 164.31304931640625},
      "flags": {}, "order": 5, "mode": 0,
      "inputs": [{"name": "clip", "type": "CLIP", "link": 3}],
      "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [4], "slot_index": 0}],
      "properties": {"Node name for S&R": "CLIPTextEncode"},
      "widgets_values": ["A girl smiling"]
    },
    {
      "id": 5, "type": "EmptyLatentImage", "pos": [473, 609], "size": {"0": 315, "1": 106},
      "flags": {}, "order": 0, "mode": 0,
      "outputs": [{"name": "LATENT", "type": "LATENT", "links": [2], "slot_index": 0}],
      "properties": {"Node name for S&R": "EmptyLatentImage"},
      "widgets_values": [1024, 1024, 1]
    },
    {
      "id": 9, "type": "SaveImage", "pos": [1451, 189], "size": {"0": 210, "1": 270},
      "flags": {}, "order": 9, "mode": 0,
      "inputs": [{"name": "images", "type": "IMAGE", "link": 9}],
      "properties": {},
      "widgets_values": ["ComfyUI"]
    },
    {
      "id": 4, "type": "CheckpointLoaderSimple", "pos": [45, 192], "size": {"0": 315, "1": 98},
      "flags": {}, "order": 1, "mode": 0,
      "outputs": [
        {"name": "MODEL", "type": "MODEL", "links": [11], "slot_index": 0},
        {"name": "CLIP", "type": "CLIP", "links": [3, 5], "slot_index": 1},
        {"name": "VAE", "type": "VAE", "links": [8], "slot_index": 2}
      ],
      "properties": {"Node name for S&R": "CheckpointLoaderSimple"},
      "widgets_values": ["sdxl_base_1.0.safetensors"]
    },
    {
      "id": 11, "type": "LoraLoaderModelOnly", "pos": [43, 349], "size": {"0": 315, "1": 82},
      "flags": {}, "order": 4, "mode": 0,
      "inputs": [{"name": "model", "type": "MODEL", "link": 11}],
      "outputs": [{"name": "MODEL", "type": "MODEL", "links": [12], "shape": 3, "slot_index": 0}],
      "properties": {"Node name for S&R": "LoraLoaderModelOnly"},
      "widgets_values": ["sdxl_lightning_4step_lora.pth", 1]
    },
    {
      "id": 12, "type": "Note", "pos": [44, 71], "size": {"0": 314.0921630859375, "1": 59.37213134765625},
      "flags": {}, "order": 2, "mode": 0,
      "properties": {"text": ""},
      "widgets_values": ["Remember to use the correct checkpoint for your inference step setting!"],
      "color": "#432", "bgcolor": "#653"
    },
    {
      "id": 13, "type": "Note", "pos": [861, 72], "size": {"0": 315.6669921875, "1": 58},
      "flags": {}, "order": 3, "mode": 0,
      "properties": {"text": ""},
      "widgets_values": ["Euler sampler with sgm_uniform is the default."],
      "color": "#432", "bgcolor": "#653"
    },
    {
      "id": 3, "type": "KSampler", "pos": [863, 186], "size": {"0": 315, "1": 262},
      "flags": {}, "order": 7, "mode": 0,
      "inputs": [
        {"name": "model", "type": "MODEL", "link": 12},
        {"name": "positive", "type": "CONDITIONING", "link": 4},
        {"name": "negative", "type": "CONDITIONING", "link": 6},
        {"name": "latent_image", "type": "LATENT", "link": 2}
      ],
      "outputs": [{"name": "LATENT", "type": "LATENT", "links": [7], "slot_index": 0}],
      "properties": {"Node name for S&R": "KSampler"},
      "widgets_values": [683612750974907, "randomize", 4, 1, "euler", "sgm_uniform", 1]
    }
  ],
  "links": [
    [2, 5, 0, 3, 3, "LATENT"],
    [3, 4, 1, 6, 0, "CLIP"],
    [4, 6, 0, 3, 1, "CONDITIONING"],
    [5, 4, 1, 7, 0, "CLIP"],
    [6, 7, 0, 3, 2, "CONDITIONING"],
    [7, 3, 0, 8, 0, "LATENT"],
    [8, 4, 2, 8, 1, "VAE"],
    [9, 8, 0, 9, 0, "IMAGE"],
    [11, 4, 0, 11, 0, "MODEL"],
    [12, 11, 0, 3, 0, "MODEL"]
  ],
  "groups": [],
  "config": {},
  "extra": {},
  "version": 0.4
}

comfyui/sdxl_lightning_unet.json ADDED

{
  "last_node_id": 14,
  "last_link_id": 14,
  "nodes": [
    {
      "id": 8, "type": "VAEDecode", "pos": [1209, 188], "size": {"0": 210, "1": 46},
      "flags": {}, "order": 8, "mode": 0,
      "inputs": [
        {"name": "samples", "type": "LATENT", "link": 7},
        {"name": "vae", "type": "VAE", "link": 8}
      ],
      "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [9], "slot_index": 0}],
      "properties": {"Node name for S&R": "VAEDecode"}
    },
    {
      "id": 7, "type": "CLIPTextEncode", "pos": [413, 389], "size": {"0": 425.27801513671875, "1": 180.6060791015625},
      "flags": {}, "order": 6, "mode": 0,
      "inputs": [{"name": "clip", "type": "CLIP", "link": 5}],
      "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [6], "slot_index": 0}],
      "properties": {"Node name for S&R": "CLIPTextEncode"},
      "widgets_values": [""]
    },
    {
      "id": 6, "type": "CLIPTextEncode", "pos": [415, 186], "size": {"0": 422.84503173828125, "1": 164.31304931640625},
      "flags": {}, "order": 5, "mode": 0,
      "inputs": [{"name": "clip", "type": "CLIP", "link": 3}],
      "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [4], "slot_index": 0}],
      "properties": {"Node name for S&R": "CLIPTextEncode"},
      "widgets_values": ["A girl smiling"]
    },
    {
      "id": 5, "type": "EmptyLatentImage", "pos": [473, 609], "size": {"0": 315, "1": 106},
      "flags": {}, "order": 0, "mode": 0,
      "outputs": [{"name": "LATENT", "type": "LATENT", "links": [2], "slot_index": 0}],
      "properties": {"Node name for S&R": "EmptyLatentImage"},
      "widgets_values": [1024, 1024, 1]
    },
    {
      "id": 9, "type": "SaveImage", "pos": [1451, 189], "size": {"0": 210, "1": 270},
      "flags": {}, "order": 9, "mode": 0,
      "inputs": [{"name": "images", "type": "IMAGE", "link": 9}],
      "properties": {},
      "widgets_values": ["ComfyUI"]
    },
    {
      "id": 4, "type": "CheckpointLoaderSimple", "pos": [45, 192], "size": {"0": 315, "1": 98},
      "flags": {}, "order": 1, "mode": 0,
      "outputs": [
        {"name": "MODEL", "type": "MODEL", "links": [], "slot_index": 0},
        {"name": "CLIP", "type": "CLIP", "links": [3, 5], "slot_index": 1},
        {"name": "VAE", "type": "VAE", "links": [8], "slot_index": 2}
      ],
      "properties": {"Node name for S&R": "CheckpointLoaderSimple"},
      "widgets_values": ["sdxl_base_1.0.safetensors"]
    },
    {
      "id": 12, "type": "Note", "pos": [44, 71], "size": {"0": 314.0921630859375, "1": 59.37213134765625},
      "flags": {}, "order": 2, "mode": 0,
      "properties": {"text": ""},
      "widgets_values": ["Remember to use the correct checkpoint for your inference step setting!"],
      "color": "#432", "bgcolor": "#653"
    },
    {
      "id": 13, "type": "Note", "pos": [861, 72], "size": {"0": 315.6669921875, "1": 58},
      "flags": {}, "order": 3, "mode": 0,
      "properties": {"text": ""},
      "widgets_values": ["Euler sampler with sgm_uniform is the default."],
      "color": "#432", "bgcolor": "#653"
    },
    {
      "id": 14, "type": "UNETLoader", "pos": [44, 344], "size": {"0": 315, "1": 58},
      "flags": {}, "order": 4, "mode": 0,
      "outputs": [{"name": "MODEL", "type": "MODEL", "links": [14], "shape": 3, "slot_index": 0}],
      "properties": {"Node name for S&R": "UNETLoader"},
      "widgets_values": ["sdxl_lightning_4step_unet.pth"]
    },
    {
      "id": 3, "type": "KSampler", "pos": [863, 186], "size": {"0": 315, "1": 262},
      "flags": {}, "order": 7, "mode": 0,
      "inputs": [
        {"name": "model", "type": "MODEL", "link": 14},
        {"name": "positive", "type": "CONDITIONING", "link": 4},
        {"name": "negative", "type": "CONDITIONING", "link": 6},
        {"name": "latent_image", "type": "LATENT", "link": 2}
      ],
      "outputs": [{"name": "LATENT", "type": "LATENT", "links": [7], "slot_index": 0}],
      "properties": {"Node name for S&R": "KSampler"},
      "widgets_values": [43158134645665, "randomize", 4, 1, "euler", "sgm_uniform", 1]
    }
  ],
  "links": [
    [2, 5, 0, 3, 3, "LATENT"],
    [3, 4, 1, 6, 0, "CLIP"],
    [4, 6, 0, 3, 1, "CONDITIONING"],
    [5, 4, 1, 7, 0, "CLIP"],
    [6, 7, 0, 3, 2, "CONDITIONING"],
    [7, 3, 0, 8, 0, "LATENT"],
    [8, 4, 2, 8, 1, "VAE"],
    [9, 8, 0, 9, 0, "IMAGE"],
    [14, 14, 0, 3, 0, "MODEL"]
  ],
  "groups": [],
  "config": {},
  "extra": {},
  "version": 0.4
}

images/comfyui_lora.png ADDED
images/comfyui_unet.png ADDED
images/intro.jpg ADDED