---
base_model: black-forest-labs/FLUX.1-dev
---

*Note that all these models are derivatives of black-forest-labs/FLUX.1-dev, and are therefore covered by the
[FLUX.1 [dev] Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).*

*Some models are derivatives of finetunes, and are included with the permission of the finetuner.*

# Optimised Flux GGUF models

A collection of GGUF models using mixed quantization (different layers quantized to different precisions to optimise fidelity vs. memory),
created using the [mixed gguf converter](https://github.com/chrisgoringe/mixed-gguf-converter).

They can be loaded in ComfyUI using the [ComfyUI GGUF Nodes](https://github.com/city96/ComfyUI-GGUF). Just put the gguf files in your
models/unet directory.

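If you prefer to script the download, here is a minimal sketch using huggingface_hub; the repo id and filename below are placeholders, not actual file names, so check the repository's file list first:

```python
from huggingface_hub import hf_hub_download

# Download one mixed-quant GGUF straight into ComfyUI's models/unet directory.
# repo_id and filename are illustrative placeholders.
hf_hub_download(
    repo_id="some-user/optimised-flux-gguf",
    filename="flux1-dev_mx8_2.gguf",
    local_dir="ComfyUI/models/unet",
)
```
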
## Naming convention (mx for 'mixed')

[original_model_name]_mxN_N.gguf

where N_N is the average number of bits per parameter - for example, a file named flux1-dev_mx8_2.gguf averages 8.2 bits per parameter.

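As a quick illustration of the convention (not part of the converter), the average bits per parameter can be recovered from a filename like so; the example filename is hypothetical:

```python
import re

def bits_per_param(filename: str) -> float:
    """Extract the average bits-per-parameter from a mixed-quant filename."""
    m = re.match(r".+_mx(\d+)_(\d+)\.gguf$", filename)
    if m is None:
        raise ValueError(f"not a mixed-quant filename: {filename}")
    return float(f"{m.group(1)}.{m.group(2)}")

print(bits_per_param("flux1-dev_mx8_2.gguf"))  # 8.2
```
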
## Good choices to start with

- 3_1 is the smallest yet - might work on 6 GB?
- 3_8 might work on an 8 GB card
- 6_9 should be good for a 12 GB card
- 8_2 is a good choice for 16 GB cards if you want to add LoRAs etc.
- 9_2 fits on a 16 GB card

## Speed?

On an A40 (plenty of VRAM), with everything except the model identical,
the time taken to generate an image (30 steps, deis sampler) was about 65% longer than for the full model (45s vs. 27s).

Quantised models will generally be slower because the weights have to be converted back into a native torch form when they are needed.

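To make that overhead concrete, here is a toy sketch (my illustration, not the ComfyUI-GGUF implementation) of a linear layer that stores int8 weights and dequantizes them on every forward pass:

```python
import torch
import torch.nn.functional as F

class DequantizingLinear(torch.nn.Module):
    """Toy linear layer holding int8 weights plus a per-channel scale.

    The dequantization in forward() happens on every call, which is the
    kind of extra work that makes quantized models slower.
    """
    def __init__(self, qweight: torch.Tensor, scale: torch.Tensor):
        super().__init__()
        self.register_buffer("qweight", qweight)  # int8 storage
        self.register_buffer("scale", scale)      # one scale per output channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Convert back to the activation dtype on the fly.
        w = self.qweight.to(x.dtype) * self.scale.unsqueeze(1)
        return F.linear(x, w)

# Usage: quantize a random weight to int8, then run a forward pass.
w = torch.randn(16, 8)
scale = w.abs().amax(dim=1) / 127.0
layer = DequantizingLinear((w / scale.unsqueeze(1)).round().to(torch.int8), scale)
y = layer(torch.randn(4, 8))  # weights dequantized inside the call
```
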
## How are these 'optimised'?

The optimization is based on a cost metric representing the error introduced by quantizing a specified layer with a specified quant.
The data can be found [here](https://github.com/chrisgoringe/mixed-gguf-converter/tree/main/costs), and details of the process are below.

From this, any possible quantization can be given a cost and a benefit (bits saved). The possible quantizations are then sorted from
best (benefit/cost) to worst, and applied in order until the required number of bits has been removed.

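In Python-flavoured pseudocode, the greedy selection might look like this (a sketch over assumed data structures, not the converter's actual code):

```python
def choose_quantizations(candidates, bits_to_remove):
    """Greedy mixed-quantization selection.

    candidates: iterable of (layer, quant, cost, bits_saved) tuples,
    where cost is the measured error of applying quant to layer.
    """
    # Sort from best (most bits saved per unit of error) to worst.
    ranked = sorted(candidates, key=lambda c: c[3] / max(c[2], 1e-12), reverse=True)
    chosen, bits_removed = {}, 0.0
    for layer, quant, cost, bits_saved in ranked:
        if bits_removed >= bits_to_remove:
            break
        if layer in chosen:  # at most one quant per layer
            continue
        chosen[layer] = quant
        bits_removed += bits_saved
    return chosen
```
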
### Calculating costs

I created a database of the hidden states at the start and end of the transformer stack as follows:

- 240 prompts used for Flux images popular at civit.ai were run through the full Flux.1-dev model with randomised resolution and step count.
- For a randomly selected step in the inference, the hidden states before and after the layer stack were captured.

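A minimal sketch of recording such input/output pairs (the stand-in stack below is an assumption, not Flux's architecture):

```python
import torch

def capture_stack_io(stack: torch.nn.Module, hidden: torch.Tensor):
    """Return the hidden states entering and leaving a stack of layers."""
    before = hidden.detach().clone()
    with torch.no_grad():
        after = stack(hidden)
    return before, after.detach()

# Usage with a stand-in "transformer stack":
stack = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.GELU(), torch.nn.Linear(64, 64))
before, after = capture_stack_io(stack, torch.randn(1, 16, 64))
```
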
To calculate the cost of quantizing a specific layer to a specific quant:

- A single layer in the transformer stack was quantized
- The 240 initial hidden states were run through the stack
- The cost is defined as the mean square difference between the outputs of the modified stack and the unmodified stack

The cost, therefore, is a measure of how much change is introduced into the output hidden states by the quantization.

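As a sketch (assumed callables, not the converter's code), the cost of one (layer, quant) combination could be computed like this:

```python
import torch

def quantization_cost(stack, quantized_stack, input_states) -> float:
    """Mean squared difference between the outputs of the unmodified stack
    and a stack with one layer quantized, averaged over the captured states."""
    total = 0.0
    with torch.no_grad():
        for h in input_states:  # the 240 captured input hidden states
            reference = stack(h)
            modified = quantized_stack(h)
            total += torch.mean((reference - modified) ** 2).item()
    return total / len(input_states)
```
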
## Not quantized

In all these models, the 'in' blocks, the final layer blocks, and all normalization scale parameters are not quantized.
These represent 0.54% of all parameters in the model.

In patch models (where the states were quantised using llama.cpp code), the biases are also not quantized.
These represent 0.03% of all parameters in the model.

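Expressed as a filter (the tensor-name patterns here are guesses for illustration, not the converter's actual keys):

```python
# Illustrative filter for the exclusions above; the name patterns are
# assumptions, not the converter's actual tensor keys.
KEEP_SUBSTRINGS = ("_in", "final_layer")  # the 'in' blocks and final layer blocks

def should_quantize(tensor_name: str, is_patch_model: bool = False) -> bool:
    if any(s in tensor_name for s in KEEP_SUBSTRINGS):
        return False
    if "norm" in tensor_name and tensor_name.endswith("scale"):
        return False  # normalization scale parameters stay at full precision
    if is_patch_model and tensor_name.endswith("bias"):
        return False  # patch models also keep biases unquantized
    return True
```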