fireworks-ai / FLUX.1-dev-fp8-flumina
Fireworks AI · Safetensors
License: flux-1-dev-non-commercial-license
FLUX.1-dev-fp8-flumina — 7 contributors, 45 commits
Latest commit 7a7b2c1 (aredden, 5 months ago): Fix issue where the LoRA alpha is incorrect when the LoRA comes from a transformers checkpoint
configs/                       Add device-specific configs and more input image type options; small model-spec-from-args change (5 months ago)
modules/                       Remove f8 flux; configure at load instead; improved quality and corrected configs (5 months ago)
.gitignore          125 B      Small fixes and cleanup (5 months ago)
README.md           18.9 kB    Add LoRA loading (5 months ago)
api.py              907 B      Remove unnecessary synchronize; add more universal seeding and a limit if run on Windows (5 months ago)
float8_quantize.py  17.1 kB    Small fix for issue where f16 CublasLinear layers weren't being used even when available (5 months ago)
flux_emphasis.py    14.1 kB    Remove unnecessary tokenization (still needs work) (5 months ago)
flux_pipeline.py    25.8 kB    Add LoRA loading (5 months ago)
image_encoder.py    1.07 kB    Add device-specific configs and more input image type options; small model-spec-from-args change (5 months ago)
lora_loading.py     16 kB      Fix issue where the LoRA alpha is incorrect when the LoRA comes from a transformers checkpoint (5 months ago)
main.py             6.29 kB    Add quantize embedders/modulation to argparse options (5 months ago)
main_gr.py          4.28 kB    Small fixes and cleanup (5 months ago)
requirements.txt    245 B      Add uvicorn requirement (5 months ago)
util.py             10.5 kB    Add quantize embedders/modulation to argparse options (5 months ago)