Update README.md
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---

# LCM LoRA SDXL Rank 1

LCM LoRA SDXL Rank 1 is a resized version of [LCM LoRA SDXL](https://huggingface.co/latent-consistency/lcm-lora-sdxl), reduced to rank 1 with the [resize_lora](https://github.com/kohya-ss/sd-scripts/blob/main/networks/resize_lora.py) script from kohya-ss/sd-scripts. The resized LoRA can still run inference with `LCMScheduler`, maintaining the fast generation at low step counts and low guidance scales while the output is improved.
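Rank resizing of this kind works by truncating the singular value decomposition of each LoRA delta weight. A minimal numpy sketch of the idea (illustrative only, not the actual resize script, which also handles per-module alphas and scaling):

```python
import numpy as np

# A LoRA update is a low-rank product: delta_W = B @ A with rank r.
rng = np.random.default_rng(0)
r, d_in, d_out = 8, 16, 16
A = rng.normal(size=(r, d_in))       # "down" matrix
B = rng.normal(size=(d_out, r))      # "up" matrix
delta_w = B @ A                      # rank-8 update

# Resize to rank 1: keep only the largest singular component.
U, S, Vt = np.linalg.svd(delta_w, full_matrices=False)
new_rank = 1
B1 = U[:, :new_rank] * S[:new_rank]  # new "up" matrix, shape (d_out, 1)
A1 = Vt[:new_rank, :]                # new "down" matrix, shape (1, d_in)
delta_w_r1 = B1 @ A1                 # best rank-1 approximation of delta_w

print(int(np.linalg.matrix_rank(delta_w_r1)))  # 1
```

The truncated pair (`A1`, `B1`) stores far fewer parameters than the original while keeping the dominant direction of the update, which is why the rank-1 file is so much smaller.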

<Gallery />

## Download model

Weights for this model are available in Safetensors format.

[Download](/Linaqruf/lcm-lora-sdxl-rank1/tree/main) them in the Files & versions tab.

## Usage

LCM-LoRA is supported in the 🤗 Hugging Face Diffusers library from version v0.23.0 onwards. To run the model, first install the latest version of Diffusers as well as `peft`, `accelerate`, and `transformers`:

```bash
pip install --upgrade diffusers transformers accelerate peft
```
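To confirm the environment meets the version requirement, a small stdlib helper can report what is installed (a hypothetical convenience snippet, not part of Diffusers):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Map each package name to its installed version, or None if absent."""
    report = {}
    for pkg in packages:
        try:
            report[pkg] = version(pkg)
        except PackageNotFoundError:
            report[pkg] = None
    return report

print(installed_versions(["diffusers", "transformers", "accelerate", "peft"]))
```

Any entry reported as `None` (or a `diffusers` version below 0.23.0) means the install step above still needs to be run.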

### Text-to-Image

The adapter can be loaded with its base model `stabilityai/stable-diffusion-xl-base-1.0`. Next, the scheduler needs to be changed to [`LCMScheduler`](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler), and the number of inference steps can be reduced to just 2 to 8. Make sure to either disable `guidance_scale` or use values between 1.0 and 2.0.

```python
import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
adapter_id = "Linaqruf/lcm-lora-sdxl-rank1"

# load the SDXL base pipeline in fp16 and switch to the LCM scheduler
pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# load and fuse the LCM LoRA
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# disable classifier-free guidance by passing guidance_scale=0
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0).images[0]
```

![](./image.png)

## Acknowledgement

- https://twitter.com/2vXpSwA7/status/1726706470732091667