---
license: mit
tags:
- stable-diffusion
- stable-diffusion-diffusers
inference: false
---
|
# SDXL-VAE-FP16-Fix
|
SDXL-VAE-FP16-Fix is the [SDXL VAE](https://huggingface.co/stabilityai/sdxl-vae), but modified to run in fp16 precision without generating NaNs.
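The usual mechanism behind fp16 NaNs is overflow: float16 can only represent magnitudes up to about 65504, so any larger intermediate activation becomes `inf`, and subsequent arithmetic on `inf` (e.g. `inf - inf`) yields NaN. A minimal sketch of that failure mode in plain PyTorch (no model weights needed, and illustrative only — the exact overflowing ops inside the VAE are not shown here):

```python
import torch

# float16 tops out at ~65504; anything larger overflows to inf.
big = torch.tensor(70000.0)            # fine in float32
fp16 = big.to(torch.float16)
print(fp16)                            # tensor(inf, dtype=torch.float16)

# Arithmetic on inf then produces NaN (inf - inf is undefined),
# which is how oversized activations can poison an fp16 decode.
print(fp16 - fp16)                     # tensor(nan, dtype=torch.float16)
print(torch.finfo(torch.float16).max)  # 65504.0
```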
|
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# Load the SDXL base pipeline in fp16 (its bundled VAE is the original one)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
# Load the fixed VAE separately, also in fp16
fixed_vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix").half().to("cuda")

prompt = "An astronaut riding a green horse"
latents = pipe(prompt=prompt, output_type="latent").images

# Decode the same latents with each VAE in both precisions and compare
for vae in (pipe.vae, fixed_vae):
    for dtype in (torch.float32, torch.float16):
        with torch.no_grad(), torch.cuda.amp.autocast(dtype=torch.float16, enabled=(dtype == torch.float16)):
            print(dtype, "sdxl-vae" if vae is pipe.vae else "sdxl-vae-fp16-fix")
            # display() assumes a notebook (IPython) environment
            display(pipe.image_processor.postprocess(vae.decode(latents / vae.config.scaling_factor).sample)[0])
```
|
|
| VAE               | Decoding in `float32` precision | Decoding in `float16` precision |
| ----------------- | ------------------------------- | ------------------------------- |
| SDXL-VAE          | ✅ ![](./images/orig-fp32.png)   | ⚠️ ![](./images/orig-fp16.png)   |
| SDXL-VAE-FP16-Fix | ✅ ![](./images/fix-fp32.png)    | ✅ ![](./images/fix-fp16.png)    |