This is a copy of the SSD-1B model (https://huggingface.co/segmind/SSD-1B) with the UNet replaced by the LCM-distilled UNet (https://huggingface.co/latent-consistency/lcm-ssd-1b) and the scheduler config set to default to LCMScheduler.

This lets the LCM-distilled SSD-1B run as a standard DiffusionPipeline, with no manual UNet or scheduler swapping:
```python
from diffusers import DiffusionPipeline
import torch

# Load the repackaged pipeline; the LCM UNet and LCMScheduler are already baked in.
pipe = DiffusionPipeline.from_pretrained(
    "Vargol/lcm-ssd-1b-full-model", variant="fp16", torch_dtype=torch.float16
).to("mps")  # Apple Silicon; use "cuda" or "cpu" as appropriate

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
generator = torch.manual_seed(0)

# LCM-distilled models only need a handful of inference steps.
image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0
).images[0]
image.save("distilled.png")
```
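
For reference, the equivalent pipeline can be assembled by hand from the two upstream repositories, which is essentially what this model packages into a single checkout. The snippet below is a minimal sketch of that manual setup, following the pattern documented for latent-consistency/lcm-ssd-1b; the device string is an assumption, so pick whatever suits your hardware.

```python
from diffusers import DiffusionPipeline, UNet2DConditionModel, LCMScheduler
import torch

# Load the LCM-distilled UNet on its own...
unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-ssd-1b", variant="fp16", torch_dtype=torch.float16
)

# ...drop it into the base SSD-1B pipeline...
pipe = DiffusionPipeline.from_pretrained(
    "segmind/SSD-1B", unet=unet, variant="fp16", torch_dtype=torch.float16
)

# ...and switch the scheduler to LCMScheduler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("mps")  # assumption: same Apple Silicon target as above; swap for "cuda"/"cpu"
```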