wikeeyang committed on
Commit 592e8f2 · verified · 1 Parent(s): 64ff3ec

Update README.md

Files changed (1)
  1. README.md +10 -1
README.md CHANGED
@@ -31,6 +31,11 @@ Recommended 6-10 steps. Greatly improved quality compared to other Flux.1 model.
 
 ![](./compare.jpg)
 
+At users' request, a GGUF Q8_0 quantized version of the model file is now provided. Due to time constraints, this version has not been tested in detail; image details may differ slightly from the fp8 model, but the difference should be small. The model loading method and the model conversion method are described below, and a sample image is shown here:
+
+![](./gguf-sample.png)
 ----
 
 # Recommend:
@@ -40,8 +45,9 @@ Recommended 6-10 steps. Greatly improved quality compared to other Flux.1 model.
 - Long CLIP: https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors
 - Text Encoders: https://huggingface.co/silveroxides/CLIP-Collection/blob/main/t5xxl_flan_latest-fp8_e4m3fn.safetensors
 - VAE: https://huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main/vae
+- GGUF Version: you need to install the GGUF model support nodes: https://github.com/city96/ComfyUI-GGUF
 
-- **Simple workflow ** very simple workflow as below, needn't any other comfy custom nodes:
+**Simple workflow**: a very simple workflow, shown below, that needs no other ComfyUI custom nodes (for the GGUF version, please use city96's UNET Loader (GGUF) node):
 
 ![](./workflow.png)
 
@@ -61,6 +67,9 @@ https://huggingface.co/twodgirl, Share the model quantization script and the tes
 
 https://huggingface.co/John6666, Share the model convert script and the model collections.
 
+https://github.com/city96/ComfyUI-GGUF, native support for GGUF-quantized models.
+
+https://github.com/leejet/stable-diffusion.cpp, provides pure C/C++ GGUF model conversion scripts.
 ------
 
 ## LICENSE
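A quick sanity check for a converted file such as the Q8_0 model mentioned above: the fixed GGUF header (magic `GGUF`, uint32 version, uint64 tensor count, uint64 metadata KV count, all little-endian, per the GGUF v2/v3 layout) can be read in a few lines of Python. This is a minimal sketch, not part of the original README or the linked conversion tools:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the first 24 bytes.

    Layout (little-endian): 4-byte magic b"GGUF", uint32 version,
    uint64 tensor_count, uint64 metadata_kv_count.
    """
    magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", data[:24])
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic header for illustration: version 3, 10 tensors, 2 metadata keys.
sample = struct.pack("<4sIQQ", b"GGUF", 3, 10, 2)
print(read_gguf_header(sample))  # {'version': 3, 'tensors': 10, 'metadata_kv': 2}
```

To check a real file, pass the first 24 bytes, e.g. `read_gguf_header(open("model-Q8_0.gguf", "rb").read(24))`; a valid conversion should report a plausible version and tensor count rather than raise.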