---
library_name: diffusers
tags:
- text-to-image
license: apache-2.0
inference: false
---
# Sub-path Linear Approximation Model (SLAM): DreamShaperV7
Paper: [https://arxiv.org/abs/2404.13903](https://arxiv.org/abs/2404.13903)<br/>
Project Page: [https://subpath-linear-approx-model.github.io/](https://subpath-linear-approx-model.github.io/)<br/>
This checkpoint is distilled from [https://huggingface.co/Lykon/dreamshaper-7](https://huggingface.co/Lykon/dreamshaper-7) with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only 2-4.
## Usage
First, install the latest version of the Diffusers library, along with `peft`, `accelerate`, and `transformers`.
```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```
We implement SLAM to be compatible with [LCMScheduler](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler). You can use SLAM just as you would use LCM, keeping `guidance_scale` fixed at 1.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("alimama-creative/slam-dreamshaper7")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to("cuda", torch.float16)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# SLAM needs only 2-4 inference steps; keep guidance_scale fixed at 1.
num_inference_steps = 4
images = pipe(
    prompt=prompt,
    num_inference_steps=num_inference_steps,
    guidance_scale=1,
    lcm_origin_steps=50,
    output_type="pil",
).images
images[0].save("output.png")
```
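If you prefer to be explicit about the scheduler (for example, when loading the weights into another pipeline), you can attach an `LCMScheduler` yourself. The sketch below is a hedged variant of the snippet above, not part of the original card; it assumes the repository's scheduler config is LCM-compatible.

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "alimama-creative/slam-dreamshaper7", torch_dtype=torch.float16
)
# Rebuild the scheduler explicitly from the pipeline's own config (assumes LCM compatibility).
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
# SLAM targets the 2-4 step range; fewer steps trade quality for speed.
for steps in (2, 4):
    image = pipe(prompt=prompt, num_inference_steps=steps, guidance_scale=1).images[0]
    image.save(f"slam_dreamshaper_{steps}steps.png")
```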
![slam-dreamshaper.png](https://intranetproxy.alipay.com/skylark/lark/0/2024/png/102756509/1714305398411-74a8dd57-a933-42d6-bc43-2e88bce18130.png)