---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/LICENSE.md
base_model:
- black-forest-labs/FLUX.1-dev
---

<img src="https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/resolve/main/images/banners/ComfyUI_2024-11-30_0088.jpg" alt="FLUX">

**====================================**

**FLUX Model Merges & Tweaks: Detail Enhancement and Acceleration**

<img src="https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/resolve/main/images/banners/ComfyUI_2024-11-30_0093.jpg" alt="FLUX Model Merges & Tweaks: Detail Enhancement and Acceleration">

This repository contains merged models built upon the following base models:
- [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha)
- [Flux-dev-de-distill](https://huggingface.co/nyanko7/flux-dev-de-distill)

Detail enhancement and acceleration techniques have been applied, with optimizations targeted at NVIDIA 40-series cards (and likely 30-series as well). The goal is high-efficiency accelerated models with lower overhead. The de-re-distill model can be used with CFG left at 1, and the baked-in accelerators work as intended.
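
For orientation, here is a minimal, hedged sketch of loading one of the merged FP8 checkpoints with diffusers and sampling at the step count the merge targets. It assumes a recent diffusers release with Flux single-file loading; the filename and prompt are illustrative, and the CFG note above refers to the classifier-free guidance setting in UIs such as ComfyUI.

```python
# Minimal sketch (not the authors' workflow): load a merged FP8 transformer with
# diffusers and sample at the step count the merge was built for.
# Assumes a recent diffusers release with Flux single-file support; filename is illustrative.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_single_file(
    "Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2.safetensors",
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",      # supplies the text encoders, VAE, and scheduler
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()          # helpful on 24 GB and smaller cards

image = pipe(
    "a highly detailed macro photograph of a dragonfly on a dew-covered leaf",
    num_inference_steps=8,               # 8 for the Hyper-8/Turbo-8 merges, 16 for the Hyper-16 merges
).images[0]
image.save("flux_merge_test.png")
```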

**====================================**

<img src="https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/resolve/main/images/banners/ComfyUI_2024-11-30_0097.jpg" alt="Detail Plus!">

**Detail Plus! - Built upon the base model [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha):**

**Detail Enhancement Used:**
- **Style LORA - Extreme Detailer for FLUX.1-dev** (Weight: 0.5) ([Model Link](https://civitai.com/models/832683))
- **Best of Flux: Style Enhancing LoRA** (Weight: 0.25) ([Model Link](https://civitai.com/models/821668))

1. **SAFETensors Format (fp8_e4m3fn_fast)**:
   - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)
   - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)
   - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)

2. **GGUF Quantized Models (Q8_0)**:
   - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
   - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
   - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
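
The GGUF variants above can also be loaded outside ComfyUI. The sketch below assumes diffusers >= 0.32 (which added `GGUFQuantizationConfig`) plus the `gguf` package; since the Lite-8B merges use a reduced transformer architecture, an explicit `config=` may be needed if automatic detection fails. The filename is illustrative.

```python
# Hypothetical sketch: loading a Q8_0 GGUF transformer via diffusers' GGUF support.
# Assumes diffusers >= 0.32 and the `gguf` package are installed.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

transformer = FluxTransformer2DModel.from_single_file(
    "flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    # If the reduced Lite-8B layout is not auto-detected, pass config= pointing
    # at the Lite-8B transformer config (assumption, not verified here).
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
image = pipe("a cozy cabin in a snowy forest, intricate detail",
             num_inference_steps=8).images[0]
image.save("gguf_merge_test.png")
```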

**====================================**

**Detail Plus! Distilled - Built upon the base model [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha):**

**Detail Enhancement Used:**
- **Style LORA - Extreme Detailer for FLUX.1-dev** (Weight: 0.15) ([Model Link](https://civitai.com/models/832683))
- **Best of Flux: Style Enhancing LoRA** (Weight: 0.06) ([Model Link](https://civitai.com/models/821668))

**Distillation Used:**
- **Flux distilled lora** (Weight: Hyper - 0.65 | Turbo - 0.50) ([Model Link](https://civitai.com/models/977247/flux-distilled-lora))

1. **SAFETensors Format (fp8_e4m3fn_fast)**:
   - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-Distilled-fp8_e4m3fn_fast-V2](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/FP8/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-Distilled-fp8_e4m3fn_fast-V2.safetensors)
   - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-Distilled-fp8_e4m3fn_fast-V2](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/FP8/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-Distilled-fp8_e4m3fn_fast-V2.safetensors)
   - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-Distilled-fp8_e4m3fn_fast-V2](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/FP8/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-Distilled-fp8_e4m3fn_fast-V2.safetensors)

2. **GGUF Quantized Models (Q8_0)**:
   - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-Distilled-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-Distilled-V2-Q8_0.gguf)
   - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-Distilled-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-Distilled-V2-Q8_0.gguf)
   - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-Distilled-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-Distilled-V2-Q8_0.gguf)

**====================================**

<img src="https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/resolve/main/images/banners/ComfyUI_2024-11-30_0119.jpg" alt="Detail Plus! De-Re-Distilled">

**Detail Plus! De-Re-Distilled - Built upon the base model [Flux-dev-de-distill](https://huggingface.co/nyanko7/flux-dev-de-distill):**

**Detail Enhancement Used:**
- **Style LORA - Extreme Detailer for FLUX.1-dev** (Weight: 0.15) ([Model Link](https://civitai.com/models/832683))
- **Best of Flux: Style Enhancing LoRA** (Weight: 0.15) ([Model Link](https://civitai.com/models/821668))

**Re-Distillation Used:**
- **Flux distilled lora** (Weight: -1.00) ([Model Link](https://civitai.com/models/977247/flux-distilled-lora))
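
To make the weights above concrete: merging a LoRA at a given strength folds its low-rank delta directly into the base weights, and a negative strength (like the -1.00 re-distillation weight) subtracts that delta instead of adding it. Below is a conceptual PyTorch sketch of that arithmetic; it is not the actual merge tooling used for these checkpoints.

```python
# Conceptual sketch (not the actual merge tooling): folding a LoRA update into a
# base weight matrix at a fixed strength. A negative strength, as in the -1.00
# re-distillation weight above, removes the LoRA's learned delta instead of adding it.
import torch

def fold_lora(base_weight: torch.Tensor,
              lora_down: torch.Tensor,   # shape (rank, in_features)
              lora_up: torch.Tensor,     # shape (out_features, rank)
              alpha: float,
              strength: float) -> torch.Tensor:
    rank = lora_down.shape[0]
    delta = (alpha / rank) * (lora_up @ lora_down)   # standard LoRA scaling
    return base_weight + strength * delta

# Toy example: strength=+1.0 adds the delta, strength=-1.0 subtracts it.
W = torch.randn(64, 64)
down, up = torch.randn(8, 64), torch.randn(64, 8)
W_plus  = fold_lora(W, down, up, alpha=8.0, strength=1.0)
W_minus = fold_lora(W, down, up, alpha=8.0, strength=-1.0)
```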

1. **SAFETensors Format V2 (fp8_e4m3fn_fast)**:
   - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/FP8/Version%202/Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2.safetensors)
   - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/FP8/Version%202/Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2.safetensors)
   - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/FP8/Version%202/Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2.safetensors)

2. **GGUF Quantized Models (Q8_0)**:
   - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q8_0/Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0.gguf)
   - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q8_0/Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0.gguf)
   - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q8_0/Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0.gguf)

3. **GGUF Quantized Models (Q6_K)**:
   - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q6_K/Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K.gguf)
   - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q6_K/Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K.gguf)
   - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q6_K/Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K.gguf)

4. **GGUF Quantized Models (Q4_K_S)**:
   - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q4_K_S/Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S.gguf)
   - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q4_K_S/Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S.gguf)
   - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q4_K_S/Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S.gguf)

**====================================**

**Acceleration Credits:**

* [ByteDance](https://huggingface.co/ByteDance), a leading AI technology company, developed the following acceleration LoRAs:
   - [Hyper-FLUX.1-dev-16steps](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-FLUX.1-dev-16steps-lora.safetensors)
   - [Hyper-FLUX.1-dev-8steps](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-FLUX.1-dev-8steps-lora.safetensors)

* [Alimama Creative](https://huggingface.co/alimama-creative), a renowned generative AI innovator, developed the following acceleration model:
   - [FLUX.1-Turbo-Alpha](https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha)

**====================================**

**Attribution and Licensing Notice:**

The [FLUX.1-dev Model](https://huggingface.co/black-forest-labs/FLUX.1-dev) is licensed by Black Forest Labs, Inc. under the FLUX.1-dev [Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). Copyright Black Forest Labs, Inc.

Our model weights are released under the FLUX.1-dev [Non-Commercial License](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/LICENSE.md).

These merges combine the strengths of the listed models, applying detail enhancement and acceleration techniques to create unique and capable models built upon [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha) & [Flux-dev-de-distill](https://huggingface.co/nyanko7/flux-dev-de-distill). We hope this contributes positively to the open image generation community!