Sub-path Linear Approximation Model (SLAM): DreamShaperV7

Paper: https://arxiv.org/abs/2404.13903
Project Page: https://subpath-linear-approx-model.github.io/
This checkpoint is distilled from https://huggingface.co/Lykon/dreamshaper-7 with our proposed Sub-path Linear Approximation Model (SLAM), which reduces the number of inference steps to only 2-4.

Usage

First, install the latest version of the Diffusers library as well as peft, accelerate and transformers.

pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft

We implement SLAM to be compatible with LCMScheduler. You can use SLAM just as you would use LCM, with guidance_scale fixed at 1.

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("alimama-creative/slam-dreamshaper7")

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float16)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# SLAM needs very few sampling steps; 2-4 works well.
num_inference_steps = 4

# Keep guidance_scale at 1, as recommended for SLAM.
images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=1, lcm_origin_steps=50, output_type="pil").images
images[0].save("slam-dreamshaper.png")
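
Alternatively, you can set LCMScheduler explicitly when loading the checkpoint. The snippet below is a minimal sketch assuming the repository follows the standard Stable Diffusion pipeline layout, so the scheduler can be swapped in from the existing config:

from diffusers import AutoPipelineForText2Image, LCMScheduler
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "alimama-creative/slam-dreamshaper7", torch_dtype=torch.float16
)
# Replace the default scheduler with LCMScheduler, which SLAM is compatible with.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# 2-4 steps with guidance_scale fixed at 1, as recommended above.
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=1.0).images[0]
image.save("slam-dreamshaper-lcm.png")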

Sample output: slam-dreamshaper.png
