---
license: cc-by-nc-4.0
library_name: diffusers
base_model: PixArt-alpha/PixArt-XL-2-1024-MS
tags:
- lora
- text-to-image
inference: False
---
# ⚡ FlashDiffusion: FlashPixart ⚡
Flash Diffusion is a diffusion distillation method proposed in [ADD ARXIV]() by *Clément Chadebec, Onur Tasar and Benjamin Aubin*.
This model is a **66.5M**-parameter LoRA-distilled version of the PixArt-α model that can generate 1024x1024 images in **4 steps**. See our [live demo](https://huggingface.co/spaces/jasperai/FlashPixart).
<p align="center">
<img style="width:700px;" src="images/hf_grid.png">
</p>
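As background on where a parameter count like **66.5M** comes from: a LoRA adapter of rank `r` on a `d_in x d_out` linear layer adds `r * (d_in + d_out)` trainable parameters (one `r x d_in` down-projection plus one `d_out x r` up-projection), summed over all adapted layers. A minimal sketch, with hypothetical dimensions and rank that are not the actual FlashPixart configuration:

```python
# LoRA replaces a frozen weight W (d_out x d_in) with W + B @ A,
# where A is (r x d_in) and B is (d_out x r), so the extra trainable
# parameter count per adapted layer is r * (d_in + d_out).
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    return rank * (d_in + d_out)

# Hypothetical example layer (made-up sizes, not the real adapter config):
print(lora_params(1152, 1152, 64))  # 147456 extra parameters for one layer
```

The total reported for a model is simply this per-layer count accumulated over every attention and feed-forward projection the adapter targets.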
# How to use?
The model can be used directly with the `PixArtAlphaPipeline` from the `diffusers` library, reducing the number of required sampling steps to **2-4 steps**.
```python
import torch
from diffusers import PixArtAlphaPipeline, Transformer2DModel, LCMScheduler
from peft import PeftModel

# Load the base transformer and apply the distilled LoRA weights
transformer = Transformer2DModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="transformer",
    torch_dtype=torch.float16,
)
transformer = PeftModel.from_pretrained(
    transformer,
    "jasperai/flash-pixart",
)

# Build the pipeline around the LoRA-patched transformer
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    transformer=transformer,
    torch_dtype=torch.float16,
)

# Use the LCM scheduler with trailing timestep spacing for few-step sampling
pipe.scheduler = LCMScheduler.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="scheduler",
    timestep_spacing="trailing",
)
pipe.to("cuda")

# Distilled models are sampled without classifier-free guidance (guidance_scale=0)
prompt = "A raccoon reading a book in a lush forest."
image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
```
<p align="center">
<img style="width:400px;" src="images/raccoon.png">
</p>
# Training Details
The model was trained for 40k iterations on 4 H100 GPUs. Please refer to the [paper]() for further details on the training parameters.
## License
This model is released under the Creative Commons BY-NC 4.0 license.