onurxtasar committed
Commit: 12c5a4d
Parent(s): b6ddb55

Update README.md
- Added training details & metrics.
- Added Eyal's name.

README.md CHANGED
@@ -10,7 +10,7 @@ inference: False
 # ⚡ FlashDiffusion: FlashPixart ⚡
 
 
-Flash Diffusion is a diffusion distillation method proposed in [ADD ARXIV]() *by Clément Chadebec, Onur Tasar and Benjamin Aubin.*
+Flash Diffusion is a diffusion distillation method proposed in [ADD ARXIV]() *by Clément Chadebec, Onur Tasar, Eyal Benaroche, and Benjamin Aubin.*
 This model is a **66.5M** LoRA-distilled version of the Pixart-α model that is able to generate 1024x1024 images in **4 steps**. See our [live demo](https://huggingface.co/spaces/jasperai/FlashPixart).
 
 
@@ -63,8 +63,11 @@ image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
 </p>
 
 # Training Details
-The model was trained for 40k iterations on 4 H100 GPUs. Please refer to the [paper]() for further parameter details.
+The model was trained for 40k iterations on 4 H100 GPUs (approximately 188 hours of training). Please refer to the [paper]() for further parameter details.
 
+**Metrics on COCO 2014 validation (Table 4)**
+- FID-10k: 29.30 (4 NFE)
+- CLIP Score: 0.303 (4 NFE)
 
 ## License
 This model is released under the Creative Commons BY-NC license.
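The hunk context above quotes the README's 4-step inference call (`image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]`); the setup code for `pipe` sits in the unchanged lines this diff omits. Below is a minimal sketch of how such a pipeline could be assembled with diffusers and peft. The repository id `jasperai/flash-pixart`, the `PeftModel`-based LoRA loading, and the `LCMScheduler` settings are assumptions for illustration, not taken from this commit.

```python
import torch
from diffusers import LCMScheduler, PixArtAlphaPipeline, Transformer2DModel
from peft import PeftModel

# Base Pixart-α transformer; the distilled LoRA is attached on top of it.
transformer = Transformer2DModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="transformer",
    torch_dtype=torch.float16,
)
# Assumed repository id for this model's LoRA adapter.
transformer = PeftModel.from_pretrained(transformer, "jasperai/flash-pixart")

# Standard Pixart-α pipeline built around the LoRA-patched transformer.
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    transformer=transformer,
    torch_dtype=torch.float16,
).to("cuda")

# Few-step sampling scheduler; the "trailing" timestep spacing is an assumption.
pipe.scheduler = LCMScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

prompt = "A raccoon reading a book in a lush forest."

# 4 steps, no classifier-free guidance, matching the call quoted in the diff.
image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
image.save("raccoon.png")
```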