Update README.md
<img src="https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/resolve/main/images/banners/ComfyUI_2024-11-30_0088.jpg" alt="FLUX">

**====================================**

**FLUX Model Merges & Tweaks: Detail Enhancement and Acceleration**
Detail enhancement and acceleration techniques have been applied, optimized particularly for NVIDIA 4XXX-series cards (and possibly 3XXX-series as well). The goal is high-efficiency accelerated models with lower overhead. The de-re-distill model can be used with CFG left at 1, and the baked-in accelerators work as intended.
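The point above about leaving CFG at 1 follows from the standard classifier-free-guidance formula; a minimal scalar sketch (illustrative only, not code from this repository) shows why a scale of 1 makes the unconditional pass redundant:

```python
# Classifier-free guidance blends a conditional and an unconditional model
# prediction. At guidance scale 1 the formula reduces to the conditional
# prediction alone, so a sampler can skip the unconditional pass entirely.

def cfg_mix(cond, uncond, scale):
    """Standard CFG blend: uncond + scale * (cond - uncond)."""
    return uncond + scale * (cond - uncond)
```

At `scale = 1.0`, `cfg_mix(c, u, 1.0)` returns `c` regardless of `u`, which is why a de-re-distilled model that behaves well at CFG 1 costs no extra guidance pass.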
**====================================**

<img src="https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/resolve/main/images/banners/ComfyUI_2024-11-30_0097.jpg" alt="Detail Plus!">
- [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)

2. **SAFETensors Format (FULL)**:
   - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus]()
   - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus]()
   - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus]()

3. **GGUF Quantized Models (Q8_0)**:
   - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
   - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
   - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
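For context on the Q8_0 files listed above: GGUF's Q8_0 scheme stores weights in blocks of 32 signed 8-bit integers with one scale per block. A simplified round-trip sketch (the real format packs the scale as fp16 and is implemented in C):

```python
# Sketch of GGUF-style Q8_0 block quantization: each block of 32 weights is
# stored as one scale plus 32 int8 values. Illustrative approximation only.

def q8_0_quantize(block):
    """Quantize one block of 32 floats into (scale, 32 int8 values)."""
    assert len(block) == 32
    amax = max(abs(x) for x in block)
    scale = amax / 127.0 if amax else 1.0
    quants = [max(-127, min(127, round(x / scale))) for x in block]
    return scale, quants

def q8_0_dequantize(scale, quants):
    """Reconstruct approximate floats from a quantized block."""
    return [scale * q for q in quants]
```

The per-block scale keeps the worst-case reconstruction error at half a quantization step, which is why Q8_0 is usually indistinguishable from fp16 in practice.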
**====================================**

<img src="https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/resolve/main/images/banners/ComfyUI_2024-11-30_0119.jpg" alt="Detail Plus! De-Re-Distilled">
- [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/FP8/Version%202/Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2.safetensors)

2. **SAFETensors Format (FULL)**:
   - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2]()
   - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2]()
   - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2]()

3. **GGUF Quantized Models (BF16)**:
   - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-BF16]()
   - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-BF16]()
   - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-BF16]()

4. **GGUF Quantized Models (Q8_0)**:
   - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q8_0/Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0.gguf)
   - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q8_0/Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0.gguf)

5. **GGUF Quantized Models (Q6_K)**:
   - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K]()
   - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K]()
   - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K]()

6. **GGUF Quantized Models (Q4_K_S)**:
   - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q4_K_S/Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S.gguf)
   - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q4_K_S/Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S.gguf)
   - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q4_K_S/Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S.gguf)

**====================================**
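The fp8_e4m3fn checkpoints listed in both families store one byte per weight. A small decoder (illustrative only; assumes the standard E4M3FN bit layout, which real loaders handle natively) shows where the format's range comes from:

```python
# E4M3FN layout: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits,
# no infinities, and a single NaN bit pattern per sign.

def decode_e4m3fn(byte):
    """Decode one E4M3FN byte into the float it represents."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0x0F
    man = byte & 0x07
    if exp == 0x0F and man == 0x07:
        return float("nan")                    # the only NaN code in E4M3FN
    if exp == 0:
        return sign * (man / 8.0) * 2.0 ** -6  # subnormals
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)
```

One byte per weight is what makes these fp8 files roughly half the size of fp16 checkpoints, at the cost of a coarse ±448 representable range.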
**Acceleration Credits:**

* [Alimama Creative](https://huggingface.co/alimama-creative), a renowned NLP innovator, optimized the following model:
  - [FLUX.1-Turbo-Alpha](https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha)

**====================================**

**Attribution and Licensing Notice:**