fireworks-ai / FLUX.1-dev-fp8-flumina
Fireworks AI · Safetensors · License: flux-1-dev-non-commercial-license
7 contributors · 54 commits
Latest commit d316f04 by flowpoint, 4 months ago: "add benchmarks numbers for rtx4000ada (non-sff)"
configs/            —          Adding device specific configs & more input image type options + small model spec from args change (5 months ago)
modules/            —          Improved precision / reduced frequency of nan outputs, allow bf16 t5, f32 rmsnorm, larger clamp (4 months ago)
.gitignore          125 Bytes  Small fixes & clean up (5 months ago)
LICENSE             11.3 kB    Create LICENSE (4 months ago)
README.md           20.7 kB    add benchmarks numbers for rtx4000ada (non-sff) (4 months ago)
api.py              907 Bytes  Remove unnecessary synchronize, add more universal seeding & limit if run on windows (5 months ago)
float8_quantize.py  18.7 kB    remove torchao dependency, quantize entirely via linear (4 months ago)
flux_emphasis.py    14.1 kB    Remove unnecessary tokenization (still needs work) (5 months ago)
flux_pipeline.py    25.8 kB    Add lora loading (5 months ago)
image_encoder.py    1.07 kB    Adding device specific configs & more input image type options + small model spec from args change (5 months ago)
lora_loading.py     16 kB      Fix issue where lora alpha is not correct if lora from transformers checkpoint (4 months ago)
main.py             6.29 kB    Add quantize embedders/modulation to argparse options (5 months ago)
main_gr.py          4.28 kB    Small fixes & clean up (5 months ago)
requirements.txt    202 Bytes  remove torchao dependency, quantize entirely via linear (4 months ago)
util.py             10.6 kB    Improved precision / reduced frequency of nan outputs, allow bf16 t5, f32 rmsnorm, larger clamp (4 months ago)