wikeeyang committed on
Commit 2bf910e
1 Parent(s): 3f34263

Update README.md

Files changed (1)
  1. README.md +23 -11
README.md CHANGED
@@ -20,6 +20,20 @@ library_name: diffusers
 
  [Also on CivitAI](https://civitai.com/models/941929)
 
+ **Wash away the grease of distillation and return the model to its true nature.**
+
+ **Probably the open-source, commercially usable Schnell base model that, among current Flux.1 Schnell tunes, strikes the best balance of image quality, detail, realism, and style diversity at fast generation (4-8 steps), while following the original Flux Schnell composition style with strong prompt adherence.**
+
+ Based on [**FLUX.1-schnell**](https://huggingface.co/black-forest-labs/FLUX.1-schnell), merged with [**LibreFLUX**](https://huggingface.co/jimmycarter/LibreFLUX), and fine-tuned with [**ComfyUI**](https://github.com/comfyanonymous/ComfyUI), [**Block_Patcher_ComfyUI**](https://github.com/cubiq/Block_Patcher_ComfyUI), [**ComfyUI_essentials**](https://github.com/cubiq/ComfyUI_essentials), and other tools. Recommended 4-8 steps; step 4 is usually enough. Greatly improved quality and realism compared to other Flux.1 Schnell models.
+
+ ![](./compare-schnell.jpg)
+
+ ================================================================================
+
  **Probably the best base model among current fast (6-10 step) Flux fine-tunes: it follows the original Flux.1 Dev style, has strong prompt adherence, delivers the best image quality, surpasses Flux.1 Dev in some details, and comes closest to Flux.1 Pro.**
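A minimal generation sketch (an editorial illustration, not part of this commit), assuming the merged checkpoint is published as a standard diffusers-format Flux repository; the repository id below is a placeholder, and the step counts follow the recommendations above (4-8 steps for the Schnell-based merge, 6-10 for the Dev-based one).

```python
import torch
from diffusers import FluxPipeline

# Placeholder repository id -- substitute the actual repo for this merge.
pipe = FluxPipeline.from_pretrained(
    "your-namespace/flux-schnell-merge",   # hypothetical name, not the real repo
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # offload idle submodules to CPU to save VRAM

image = pipe(
    prompt="a misty mountain village at sunrise, highly detailed",
    num_inference_steps=4,   # 4-8 steps recommended for the Schnell-based merge
    guidance_scale=0.0,      # Schnell-style checkpoints are typically run without CFG
    height=1024,
    width=1024,
).images[0]
image.save("sample.png")
```

For the Dev-based merge described above, raise `num_inference_steps` into the recommended 6-10 range.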
@@ -29,19 +43,11 @@ Recommended 6-10 steps. Greatly improved quality compared to other Flux.1 models.
 
  ![](./compare.jpg)
 
- GGUF Q8_0 quantized model file: tested for several days with no problems. Its image quality matches the fp8 format and is slightly better in some details. How to load GGUF model files and how to convert models are described below. Also, at users' request, a GGUF Q4_1 model has been converted and uploaded; its comparison with Q8_0 is shown below:
-
- ![](./Q8_0&Q4_1.jpg)
-
- Preliminary testing shows that the Q4_1 model does well on larger subjects such as people, but loses noticeably more fine detail when more detail is needed, so choose the fp8/Q8_0 models if your hardware allows. The Q4_1 model has been uploaded after preliminary testing; the NF4 format has lost support and will not be provided.
-
- Over-quantization loses the advantages of this high-precision model, so no quantized versions other than Q4_1 will be provided; if needed, you can download the fp8 model and quantize it yourself following the tips below.
+ ================================================================================
+
+ GGUF Q8_0 / Q5_1 / Q4_1 quantized model files have been tested and uploaded at the same time. Over-quantization loses the advantages of this high-speed, high-precision model, so no other quantized versions will be provided; if needed, you can download the fp8 model file and quantize it yourself following the tips below.
 
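The note above points readers to downloading the fp8 file and quantizing it themselves using the tips further down the README. Here is a rough editorial sketch of that flow (not part of this commit), assuming the commonly used city96/ComfyUI-GGUF conversion tools; the script path, flags, and file names below are assumptions and should be checked against that project's README.

```python
# Editorial sketch: shell out to the assumed ComfyUI-GGUF conversion tools.
# Script names, flags, and file names are assumptions -- verify them against
# https://github.com/city96/ComfyUI-GGUF before running anything.
import subprocess

SRC = "flux-merge-fp8.safetensors"   # the fp8 checkpoint mentioned above (placeholder name)
F16 = "flux-merge-F16.gguf"          # assumed name of the intermediate full-precision GGUF
OUT = "flux-merge-Q8_0.gguf"         # target quantized file

# Step 1: convert the safetensors checkpoint into a GGUF container.
subprocess.run(["python", "tools/convert.py", "--src", SRC], check=True)

# Step 2: quantize with a llama.cpp-style quantizer patched for image models
# (assumed positional arguments: input file, output file, quantization type).
subprocess.run(["./llama-quantize", F16, OUT, "Q8_0"], check=True)
```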
  # Recommend:
 
@@ -58,10 +64,16 @@ Over-quantization will lose the advantages of this high-precision model, so in a
 
  # Thanks for:
 
+ https://huggingface.co/black-forest-labs/FLUX.1-dev, a very good open-source T2I model, released under the FLUX.1 [dev] Non-Commercial License.
+
+ https://huggingface.co/black-forest-labs/FLUX.1-schnell, a very good open-source T2I model, released under the Apache-2.0 license.
+
  https://huggingface.co/Anibaaal, Flux-Fusion is a very good mixed and tuned model.
 
  https://huggingface.co/nyanko7, Flux-dev-de-distill is a great experimental project! Thanks for the [inference.py](https://huggingface.co/nyanko7/flux-dev-de-distill/blob/main/inference.py) script.
 
+ https://huggingface.co/jimmycarter/LibreFLUX, a free, de-distilled FLUX model; an Apache-2.0 version of FLUX.1-schnell.
+
  https://huggingface.co/MonsterMMORPG, Furkan shares a lot of Flux.1 model testing and tuning courses, including some special tests of the de-distilled model.
 
  https://github.com/cubiq/Block_Patcher_ComfyUI, cubiq's Flux blocks patcher sampler let me run many tests to learn how the Flux.1 block parameter values change image generation. His [ComfyUI_essentials](https://github.com/cubiq/ComfyUI_essentials) has a FluxBlocksBuster node that makes adjusting the block values easy. That is great work!
 