Commit 12586d7 by mdmachine (verified) · Parent: 4b7b733

Update README.md

Files changed (1):
  1. README.md +7 -5
README.md CHANGED
@@ -18,7 +18,9 @@ This repository contains merged models, built upon the base models:
 - [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha)
 - [Flux-dev-de-distill](https://huggingface.co/nyanko7/flux-dev-de-distill)
 
- Detail enhancement and acceleration techniques have been applied, particularly optimized for NVIDIA 4XXX cards (maybe 3XXX too). The goal is to have high efficiency accelerated models with lower overhead. The (de-re-destill | Distilled) models can be used with CFG remaining at 1ish. Also the baked-in accelerators work as intended. This is a result of workflow optimization by-products. Things can be added/removed or changed at any time.
+ Detail enhancement and acceleration techniques have been applied, optimized in particular for NVIDIA 4XXX-series cards (and possibly 3XXX-series). The goal is high-efficiency accelerated models with lower overhead.
+
+ The (de-re-destill | Distilled) models can be used with CFG kept at roughly 1, and the baked-in accelerators work (mostly) as intended. These models are by-products of workflow optimization, so things may be added, removed, or changed at any time.
 
 **====================================**
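The "CFG kept at roughly 1" note added above is easy to misread, so here is a minimal, hedged sketch of what those settings look like in practice using Hugging Face diffusers. It is not the author's workflow (which is likely ComfyUI-based): it uses `black-forest-labs/FLUX.1-dev` as a stand-in base pipeline, assumes the merged checkpoint can substitute for the stock transformer, and borrows the 8-step Hyper variant's step count.

```python
# Hedged sketch: running an accelerated Flux merge with guidance kept at ~1.
# Assumptions: a recent diffusers with Flux support, a CUDA GPU, and access
# to the (gated) FLUX.1-dev repo used here purely as a stand-in base.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # stand-in base; swap in the merge's weights
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps VRAM overhead low, per the stated goal

image = pipe(
    "a detailed studio photo of a vintage camera",
    guidance_scale=1.0,       # "CFG kept at roughly 1"
    num_inference_steps=8,    # matches the Hyper 8-step variant
).images[0]
image.save("out.png")
```
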
@@ -49,12 +51,12 @@ Detail enhancement and acceleration techniques have been applied, particularly o
 - **Best of Flux: Style Enhancing LoRA** (Weight: 0.06) ([Model Link](https://civitai.com/models/821668))
 
 **Distillation Used:**
- - **Flux distilled lora** (Weight: Hyper - 1.00 | Turbo - 0.50) ([Model Link](https://civitai.com/models/977247/flux-distilled-lora))
+ - **Flux distilled lora** (Weight: Hyper - 0.65 | Turbo - 0.50) ([Model Link](https://civitai.com/models/977247/flux-distilled-lora))
 
 3. **GGUF Quantized Models (Q8_0)**:
- - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-Distilled-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-Distilled-Q8_0.gguf)
- - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-Distilled-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-Distilled-Q8_0.gguf)
- - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-Distilled-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-Distilled-Q8_0.gguf)
+ - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-Distilled-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-Distilled-V2-Q8_0.gguf)
+ - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-Distilled-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-Distilled-V2-Q8_0.gguf)
+ - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-Distilled-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-Distilled-V2-Q8_0.gguf)
 
 **====================================**
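For context on the distillation weight change above (Hyper 1.00 → 0.65), this is roughly what baking a LoRA in at a fixed strength looks like with diffusers' LoRA API. A hedged sketch, not necessarily the tool used for these merges; the local file name is a placeholder for the Civitai download:

```python
# Hedged sketch: fusing a LoRA into the base weights at a chosen scale.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("flux_distilled_lora.safetensors")  # placeholder path
pipe.fuse_lora(lora_scale=0.65)  # Hyper weight from the list above; 0.50 for Turbo
```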
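
Since the renamed V2 files above are single-file GGUF checkpoints, here is a hedged loading sketch using diffusers' GGUF quantization support (weights stay Q8_0 in memory and are dequantized to the compute dtype on the fly). Assumptions: a recent diffusers with the `gguf` package installed, that the Lite-8B transformer's architecture is inferred correctly from the file, and that the text encoders and VAE can be borrowed from FLUX.1-dev, which may not match the author's setup:

```python
# Hedged sketch: loading one of the Q8_0 GGUF merges via diffusers.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

ckpt = (
    "https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/"
    "blob/main/Lite-8B-Plus/Distilled/GGUF/Q8_0/"
    "flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-Distilled-V2-Q8_0.gguf"
)
transformer = FluxTransformer2DModel.from_single_file(
    ckpt,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # assumed-compatible source of encoders/VAE
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
image = pipe(
    "a cozy cabin in falling snow",
    num_inference_steps=8,  # Hyper 8-step variant
    guidance_scale=1.0,     # CFG kept at ~1, per the README note
).images[0]
image.save("gguf_out.png")
```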