|
--- |
|
license: other |
|
language: |
|
- en |
|
pipeline_tag: text-to-image |
|
tags: |
|
- flux |
|
- flux.1 |
|
- flux.1-schnell |
|
- flux.1-dev |
|
- flux-merge |
|
- merge |
|
- blocks |
|
- finetune |
|
- block patcher |
|
library_name: diffusers |
|
--- |
|
|
|
# Brief introduction: |
|
|
|
[Also on CivitAI](https://civitai.com/models/941929) |
|
|
|
**Wash away the greasy distillation and return the model to its true nature.**
|
|
|
**Possibly the best open-source, commercially usable base model among current Flux.1 Schnell tunes: it generates quickly (4-8 steps, usually just 4), follows the original Flux Schnell composition style, adheres closely to prompts, and strikes the best balance of image quality, detail, realism, and style diversity.**
|
|
|
Based on [**FLUX.1-schnell**](https://huggingface.co/black-forest-labs/FLUX.1-schnell), merged with [**LibreFLUX**](https://huggingface.co/jimmycarter/LibreFLUX), and finetuned with [**ComfyUI**](https://github.com/comfyanonymous/ComfyUI), [**Block_Patcher_ComfyUI**](https://github.com/cubiq/Block_Patcher_ComfyUI), [**ComfyUI_essentials**](https://github.com/cubiq/ComfyUI_essentials) and other tools. Recommended 4-8 steps; 4 steps is usually enough. Quality and realism are greatly improved compared with other Flux.1 Schnell models.
|
|
|
![](./compare-schnell.jpg) |
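For Diffusers users, a 4-step generation would follow the usual `FluxPipeline` pattern. This is only a sketch: the repo id below is the base FLUX.1-schnell as a stand-in, so swap in this merge's Diffusers-format checkpoint if one is published.

```python
import torch
from diffusers import FluxPipeline

# NOTE: placeholder repo id -- substitute this merge's Diffusers checkpoint.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trade some speed for lower VRAM use

image = pipe(
    prompt="a lighthouse on a cliff at sunset, photorealistic",
    num_inference_steps=4,    # 4-8 steps recommended for Schnell-style models
    guidance_scale=0.0,       # Schnell-family models run without CFG
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-schnell-4step.png")
```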
|
|
|
================================================================================ |
|
|
|
**Possibly the best base model among current fast (10 steps or fewer) Flux finetunes: it follows the original Flux.1 Dev style, adheres closely to prompts, delivers the best image quality, surpasses Flux.1 Dev in some details, and comes closest to Flux.1 Pro.**
|
|
|
Based on **[Flux-Fusion-V2](https://huggingface.co/Anibaaal/Flux-Fusion-V2-4step-merge-gguf-nf4/tree/main)**, merged with **[flux-dev-de-distill](https://huggingface.co/nyanko7/flux-dev-de-distill/tree/main)**, and finetuned with **[ComfyUI](https://github.com/comfyanonymous/ComfyUI)**, **[Block_Patcher_ComfyUI](https://github.com/cubiq/Block_Patcher_ComfyUI)**, **[ComfyUI_essentials](https://github.com/cubiq/ComfyUI_essentials)** and other tools.

Recommended 6-10 steps. Quality is greatly improved compared with other Flux.1 models.
|
|
|
![](./compare.jpg) |
|
|
|
================================================================================ |
|
|
|
GGUF Q8_0 / Q5_1 / Q4_1 quantized model files have been tested and uploaded alongside the main release. Heavier quantization loses the advantages of this fast, high-precision model, so no other quantized versions will be provided; if you need one, download the FP8 model file and quantize it yourself following the tips below.
|
|
|
# Recommendations:
|
|
|
**UNET versions** (model only) need text encoders and a VAE. I recommend the CLIP and text encoder models below for better prompt guidance:
|
|
|
- Long CLIP: https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors |
|
- Text Encoders: https://huggingface.co/silveroxides/CLIP-Collection/blob/main/t5xxl_flan_latest-fp8_e4m3fn.safetensors |
|
- VAE: https://huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main/vae |
|
- GGUF versions: install the GGUF support nodes from https://github.com/city96/ComfyUI-GGUF
|
|
|
**Sample workflow**: a very simple workflow is shown below. It needs no extra ComfyUI custom nodes (for the GGUF version, use city96's Unet Loader (GGUF) node):
|
|
|
![](./workflow.png) |
|
|
|
# Thanks to:
|
|
|
https://huggingface.co/black-forest-labs/FLUX.1-dev, a very good open-source T2I model, under the FLUX.1 [dev] Non-Commercial License.
|
|
|
https://huggingface.co/black-forest-labs/FLUX.1-schnell, a very good open-source T2I model, under the Apache 2.0 license.
|
|
|
https://huggingface.co/Anibaaal, Flux-Fusion is a very good merged and tuned model.
|
|
|
https://huggingface.co/nyanko7, Flux-dev-de-distill is a great experimental project! Thanks for the [inference.py](https://huggingface.co/nyanko7/flux-dev-de-distill/blob/main/inference.py) script.
|
|
|
https://huggingface.co/jimmycarter/LibreFLUX, a free, de-distilled FLUX model: an Apache 2.0 version of FLUX.1-schnell.
|
|
|
https://huggingface.co/MonsterMMORPG, Furkan shares many Flux.1 model testing and tuning tutorials, including special tests for de-distilled models.
|
|
|
https://github.com/cubiq/Block_Patcher_ComfyUI, cubiq's Flux blocks patcher sampler let me run many tests to learn how Flux.1 block parameter values change image generation. His [ComfyUI_essentials](https://github.com/cubiq/ComfyUI_essentials) includes a FluxBlocksBuster node that makes adjusting block values easy. Great work!
|
|
|
https://huggingface.co/twodgirl, shared the model quantization script and the test dataset.
|
|
|
https://huggingface.co/John6666, shared the model conversion script and model collections.
|
|
|
https://github.com/city96/ComfyUI-GGUF, native ComfyUI support for GGUF quantized models.
|
|
|
https://github.com/leejet/stable-diffusion.cpp, provides pure C/C++ GGUF model conversion tools.
|
|
|
Note: for easy conversion to GGUF Q5/Q4, you can use the script at https://github.com/ruSauron/to-gguf-bat. Download it into the same directory as sd.exe, then drag my fp8 .safetensors model file onto the .bat file in Explorer. A CMD window will pop up; follow its menu to convert to the format you want.
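If you prefer the command line over the .bat helper, stable-diffusion.cpp's convert mode can produce the same quantizations directly. A sketch only: the file names below are placeholders, and available flags may vary by build (check `sd --help`):

```shell
# Hypothetical invocation of stable-diffusion.cpp's convert mode:
# read the FP8 safetensors and write a Q4_1 GGUF next to it.
sd -M convert \
   -m ./model-fp8.safetensors \
   -o ./model-q4_1.gguf \
   --type q4_1 \
   -v
```

Use `--type q8_0` or `--type q5_1` instead of `q4_1` for the other quantizations mentioned above.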
|
|
|
## LICENSE |
|
|
|
The weights fall under the [FLUX.1 [dev]](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) Non-Commercial License. |