GGUF Q8_0 quant
#8
by
SporkySporkness
- opened
I have been trying to quantize the model to Q8_0, because the base Flux.1 dev Q8_0 works very well, giving nearly identical results to fp16.
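For context, Q8_0 is one of the simplest GGUF quant formats: weights are split into blocks of 32 values, each block storing one fp16 scale plus 32 signed 8-bit integers. A minimal numpy sketch of the idea (not the actual ggml implementation, just an illustration of why Q8_0 stays so close to fp16):

```python
import numpy as np

def quantize_q8_0(x: np.ndarray):
    """Illustrative Q8_0: blocks of 32 values, one fp16 scale per block
    plus 32 signed 8-bit integers. Not the real ggml code."""
    assert x.size % 32 == 0
    blocks = x.reshape(-1, 32).astype(np.float32)
    # Scale each block so its largest magnitude maps to +/-127.
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    d = (amax / 127.0).astype(np.float16)  # per-block fp16 scale
    scale = d.astype(np.float32)
    scale[scale == 0] = 1.0  # avoid division by zero on all-zero blocks
    q = np.clip(np.round(blocks / scale), -127, 127).astype(np.int8)
    return d, q

def dequantize_q8_0(d, q):
    return (q.astype(np.float32) * d.astype(np.float32)).reshape(-1)

x = np.random.randn(64).astype(np.float32)
d, q = quantize_q8_0(x)
x_hat = dequantize_q8_0(d, q)
# Per-element error is bounded by roughly half the block scale,
# which is why Q8_0 output is nearly indistinguishable from fp16.
```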
However, I have never quantized before, and I did not succeed with AWPortrait-FL. Could you upload a quantized version?
Thank you so much
Let's take a look.
Here is a tutorial on how to convert to a GGUF version.
I've tried ComfyUI-GGUF and stable-diffusion.cpp, but I'm still unable to make it through to the end :(
Update: I've finally managed to get it working!
GGUF quants available at https://huggingface.co/SporkySporkness/AWPortrait-FL-GGUF/
Thank you!