---
license: openrail++
tags:
- stable-diffusion
- text-to-image
---
# SD v2.1-base with Self-Perceptual Objective
This is the official model from the paper [Diffusion Model with Perceptual Loss](https://arxiv.org/abs/2401.00110).

The model is trained with the self-perceptual objective, so it no longer needs classifier-free guidance to produce sensible images.

It is trained with a zero terminal SNR noise schedule, following the paper [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/abs/2305.08891), on LAION aesthetic 6+ data.

The model is finetuned from [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base).

This model is intended for research demonstration, not for production use.
## Usage
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("ByteDance/sd2.1-base-zsnr-laionaes6-perceptual").to("cuda")

prompt = "A young girl smiling"

# No need for CFG!
pipe(prompt, guidance_scale=0).images[0].save("out.jpg")
```
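Because the model uses a zero terminal SNR schedule, sampling with a DDIM scheduler configured for zero terminal SNR and trailing timestep spacing (as proposed in the Common Diffusion Noise Schedules paper) may be appropriate. The snippet below is a minimal sketch, assuming the `rescale_betas_zero_snr` and `timestep_spacing` options of `DDIMScheduler` available in recent diffusers releases; it is not taken from the official usage instructions.

```python
from diffusers import DDIMScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("ByteDance/sd2.1-base-zsnr-laionaes6-perceptual").to("cuda")

# Assumption: configure DDIM for zero terminal SNR (rescaled betas) and
# trailing timestep spacing, so sampling starts from the pure-noise timestep.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    rescale_betas_zero_snr=True,
    timestep_spacing="trailing",
)

pipe("A young girl smiling", guidance_scale=0).images[0].save("out.jpg")
```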
## Related Models
- ByteDance/sd2.1-base-zsnr-laionaes5
- ByteDance/sd2.1-base-zsnr-laionaes6
- ByteDance/sd2.1-base-zsnr-laionaes6-perceptual
## Cite as
```
@misc{lin2024diffusion,
  title={Diffusion Model with Perceptual Loss},
  author={Shanchuan Lin and Xiao Yang},
  year={2024},
  eprint={2401.00110},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@misc{lin2023common,
  title={Common Diffusion Noise Schedules and Sample Steps are Flawed},
  author={Shanchuan Lin and Bingchen Liu and Jiashi Li and Xiao Yang},
  year={2023},
  eprint={2305.08891},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```