---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: steps 4 scale 1
output:
url: images/F_iezcTbcAAvz8t.jpg
- text: steps 6 scale 2
output:
url: images/F_ifIM0acAAe1ln.jpg
- text: steps 8 scale 2
output:
url: images/F_ifP0yaAAA8hTQ.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# LCM LoRA SDXL Rank 1
LCM LoRA SDXL Rank 1 is a rank-1 resize of [LCM LoRA SDXL](https://huggingface.co/latent-consistency/lcm-lora-sdxl), produced with the [resize_lora.py](https://github.com/kohya-ss/sd-scripts/blob/main/networks/resize_lora.py) script from kohya-ss/sd-scripts. The resized LoRA can still run inference with `LCMScheduler`, keeping the fast low-step, low-guidance-scale inference while improving the output.
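A resize along these lines can be reproduced with that script. The command below is only a sketch: the input/output paths and precision flag are assumptions, and only `--new_rank 1` reflects what this card describes.

```bash
# Sketch: shrink an existing LoRA to rank 1 with kohya-ss/sd-scripts.
# Paths are illustrative; --new_rank 1 is the setting this card describes.
python networks/resize_lora.py \
  --model lcm-lora-sdxl.safetensors \
  --save_to lcm-lora-sdxl-rank1.safetensors \
  --new_rank 1 \
  --device cuda \
  --save_precision fp16
```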
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Linaqruf/lcm-lora-sdxl-rank1/tree/main) them in the Files & versions tab.
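Alternatively, the weights can be fetched programmatically with `huggingface_hub`. A minimal sketch follows; the safetensors filename is an assumption, so verify it in the Files & versions tab:

```python
from huggingface_hub import hf_hub_download

# NOTE: the filename below is assumed, not confirmed; check the repo files.
lora_path = hf_hub_download(
    repo_id="Linaqruf/lcm-lora-sdxl-rank1",
    filename="lcm-lora-sdxl-rank1.safetensors",
)
print(lora_path)
```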
## Usage
LCM-LoRA is supported in the 🤗 Hugging Face Diffusers library from version v0.23.0 onwards. To run the model, first
install the latest version of Diffusers along with `peft`, `accelerate`, and `transformers`:
```bash
pip install --upgrade diffusers transformers accelerate peft
```
### Text-to-Image
The adapter can be loaded on top of its base model, `stabilityai/stable-diffusion-xl-base-1.0`. The scheduler then needs to be swapped for [`LCMScheduler`](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler), after which the number of inference steps can be reduced to just 2 to 8.
Please make sure to either disable classifier-free guidance by setting `guidance_scale=0` or use values between 1.0 and 2.0.
```python
import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
adapter_id = "Linaqruf/lcm-lora-sdxl-rank1"

pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16")

# swap the default scheduler for LCMScheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# load and fuse the LCM LoRA
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# disable guidance by passing guidance_scale=0
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0).images[0]
```
![](./image.png)
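The gallery images at the top of this card pair different step counts with different guidance scales. A short sweep like the one below (reusing `pipe` and `prompt` from the snippet above; output filenames are illustrative) reproduces that kind of grid:

```python
# Sweep the (steps, guidance scale) pairs shown in the gallery widget.
for steps, scale in [(4, 1.0), (6, 2.0), (8, 2.0)]:
    image = pipe(prompt=prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
    image.save(f"steps{steps}_scale{scale}.png")
```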
## Acknowledgement
- https://twitter.com/2vXpSwA7/status/1726706470732091667