clementchadebec committed: Update README.md

README.md CHANGED
@@ -10,7 +10,8 @@ inference: False
 # ⚡ Flash Diffusion: FlashPixart ⚡
 
 
-Flash Diffusion is a diffusion distillation method proposed in [
+Flash Diffusion is a diffusion distillation method proposed in [Flash Diffusion: Accelerating Any Conditional
+Diffusion Model for Few Steps Image Generation](http://arxiv.org/abs/2406.02347) *by Clément Chadebec, Onur Tasar, Eyal Benaroche, and Benjamin Aubin.*
 This model is a **66.5M** LoRA distilled version of the Pixart-α model that is able to generate 1024x1024 images in **4 steps**. See our [live demo](https://huggingface.co/spaces/jasperai/FlashPixart).
 
 
@@ -63,7 +64,7 @@ image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
 </p>
 
 # Training Details
-The model was trained for 40k iterations on 4 H100 GPUs (representing approximately 188 hours of training). Please refer to the [paper]() for further parameter details.
+The model was trained for 40k iterations on 4 H100 GPUs (representing approximately 188 hours of training). Please refer to the [paper](http://arxiv.org/abs/2406.02347) for further parameter details.
 
 **Metrics on COCO 2014 validation (Table 4)**
 - FID-10k: 29.30 (4 NFE)
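The context of the second hunk shows the tail of the model card's inference snippet (`image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]`). The snippet itself is not part of this diff, so the sketch below is only an illustration of how a 4-step FlashPixart generation could be wired up with diffusers and PEFT; the base checkpoint id, the LoRA repository id, and the scheduler settings are assumptions, not taken from the commit.

```python
# Minimal sketch (assumed setup): load the Pixart-α base transformer, attach the
# FlashPixart LoRA with PEFT, and sample in 4 steps with an LCM-style scheduler.
# Repository ids and scheduler choice are assumptions, not confirmed by the diff.
import torch
from diffusers import PixArtAlphaPipeline, Transformer2DModel, LCMScheduler
from peft import PeftModel

base = "PixArt-alpha/PixArt-XL-2-1024-MS"  # assumed base checkpoint
lora = "jasperai/flash-pixart"             # assumed LoRA repository id

# Base transformer with the distilled LoRA weights applied on top
transformer = Transformer2DModel.from_pretrained(
    base, subfolder="transformer", torch_dtype=torch.float16
)
transformer = PeftModel.from_pretrained(transformer, lora)

# Standard Pixart-α pipeline, swapping in the LoRA-patched transformer
pipe = PixArtAlphaPipeline.from_pretrained(
    base, transformer=transformer, torch_dtype=torch.float16
).to("cuda")

# Few-step sampling is typically paired with an LCM scheduler and trailing timesteps
pipe.scheduler = LCMScheduler.from_pretrained(
    base, subfolder="scheduler", timestep_spacing="trailing"
)

prompt = "A raccoon reading a book in a lush forest."
image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
image.save("raccoon.png")
```

Guidance is disabled (`guidance_scale=0`) because the distilled model is meant to produce usable samples in very few steps without classifier-free guidance, matching the 4 NFE figure reported in the metrics above.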