FLUX

====================================

FLUX Model Merges & Tweaks: Detail Enhancement and Acceleration

This repository contains merged models built upon the following base models: Freepik's Flux.1-Lite-8B-alpha and Flux-dev-de-distill.

Detail enhancement and acceleration techniques have been applied, optimized in particular for NVIDIA RTX 40-series cards (and possibly 30-series as well). The goal is high-efficiency accelerated models with lower overhead.

The Detail Plus! De-Re-Distilled and Detail Plus! Distilled models can be used with CFG left at roughly 1, and the baked-in accelerators mostly work as intended. These models are products of ongoing workflow optimizations, so they may change, appear, or disappear at any time.
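
In ComfyUI these merges load like any other FLUX diffusion model, with CFG left around 1. For diffusers users, a minimal sketch of loading and running one of the merges is below; the local filename, step count, and prompt are illustrative assumptions, not part of the release.

```python
# Minimal sketch (assumptions: diffusers >= 0.30, the merged file downloaded
# locally; "detail-plus-distilled.safetensors" is a placeholder name).
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_single_file(
    "detail-plus-distilled.safetensors",  # placeholder: path to a merge from this repo
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",       # supplies the text encoders, VAE and scheduler
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

# FluxPipeline applies no classifier-free guidance by default, which matches
# "CFG at 1-ish" in ComfyUI terms; guidance_scale here is FLUX's embedded
# (distilled) guidance, not classic CFG.
image = pipe(
    "macro photo of a dew-covered leaf, intricate detail",
    num_inference_steps=8,  # assumption: low step count enabled by the baked-in accelerator
    guidance_scale=3.5,
).images[0]
image.save("detail_plus.png")
```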

====================================

Detail Plus!

Detail Plus! - Built upon the base model Freepik's Flux.1-Lite-8B-alpha:

Detail Enhancement Used:

  • Style LoRA - Extreme Detailer for FLUX.1-dev (Weight: 0.5) (Model Link)
  • Best of Flux: Style Enhancing LoRA (Weight: 0.25) (Model Link)

Available Formats:

  1. SafeTensors Format (fp8_e4m3fn_fast):

  2. GGUF Quantized Models (Q8_0):
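
The weights above are the LoRA scales that were baked into the merge. A hedged diffusers sketch of the same idea follows; the LoRA filenames and the Freepik repo id are assumptions, the released files already have this applied, and compatibility of arbitrary FLUX.1-dev LoRAs with the pruned Lite transformer is not guaranteed by this snippet.

```python
# Sketch of applying the two detail LoRAs at the weights listed above.
# Filenames are placeholders; the released merges already have this baked in.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "Freepik/flux.1-lite-8B-alpha",  # assumed repo id of the base model
    torch_dtype=torch.bfloat16,
)

pipe.load_lora_weights("extreme-detailer-flux1-dev.safetensors", adapter_name="detailer")
pipe.load_lora_weights("best-of-flux-style.safetensors", adapter_name="style")

# Weights from this card: Extreme Detailer 0.5, Style Enhancing 0.25
pipe.set_adapters(["detailer", "style"], adapter_weights=[0.5, 0.25])
pipe.fuse_lora(adapter_names=["detailer", "style"])  # bake the scaled deltas into the transformer
```

Fusing the scaled deltas into the transformer, rather than keeping the adapters active at runtime, is what keeps per-step overhead low.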

====================================

Detail Plus! Distilled

Detail Plus! Distilled - Built upon the base model Freepik's Flux.1-Lite-8B-alpha:

Detail Enhancement Used:

  • Style LoRA - Extreme Detailer for FLUX.1-dev (Weight: 0.15) (Model Link)
  • Best of Flux: Style Enhancing LoRA (Weight: 0.06) (Model Link)

Distillation Used:

  • Flux distilled LoRA (Weight: Hyper - 0.65 | Turbo - 0.50) (Model Link)

Available Formats:

  1. SafeTensors Format (fp8_e4m3fn_fast):

  2. GGUF Quantized Models (Q8_0):
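
The distillation step is just one more LoRA stacked on top of the detail LoRAs before fusing. Below is a sketch under the assumption that only one of the Hyper/Turbo accelerators is merged per build; the filename and repo id are placeholders.

```python
# Sketch of adding an acceleration LoRA at the weight listed above; only one
# of Hyper / Turbo is merged into a given build, and the filename is a placeholder.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "Freepik/flux.1-lite-8B-alpha", torch_dtype=torch.bfloat16
)

pipe.load_lora_weights("hyper-flux-8steps-lora.safetensors", adapter_name="accel")
pipe.set_adapters(["accel"], adapter_weights=[0.65])  # Hyper at 0.65; Turbo would use 0.50
pipe.fuse_lora(adapter_names=["accel"])
pipe.to("cuda")

# The distillation LoRA's job is to hold quality at a much lower step count.
image = pipe("a cozy reading nook, volumetric light", num_inference_steps=8).images[0]
```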

====================================

Detail Plus! De-Re-Distilled

Detail Plus! De-Re-Distilled - Built upon the base model Flux-dev-de-distill:

Detail Enhancement Used:

  • Style LoRA - Extreme Detailer for FLUX.1-dev (Weight: 0.15) (Model Link)
  • Best of Flux: Style Enhancing LoRA (Weight: 0.15) (Model Link)

Re-Distillation Used:

  • Flux distilled LoRA (Weight: -1.00) (Model Link)

Available Formats:

  1. SafeTensors Format V2 (fp8_e4m3fn_fast):

  2. GGUF Quantized Models (Q8_0):

  3. GGUF Quantized Models (Q6_K):

  4. GGUF Quantized Models (Q4_K_S):
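
The GGUF files can be used in ComfyUI via a GGUF loader custom node, or in diffusers, which has native GGUF checkpoint support. A sketch assuming diffusers >= 0.32 with the gguf package installed and a placeholder filename:

```python
# Sketch of loading one of the GGUF quants (assumptions: diffusers >= 0.32
# with the `gguf` package installed; the filename is a placeholder).
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

transformer = FluxTransformer2DModel.from_single_file(
    "detail-plus-de-re-distilled-Q6_K.gguf",  # placeholder: Q8_0 / Q6_K / Q4_K_S from the list above
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps VRAM use modest on consumer GPUs

image = pipe("an ornate clockwork owl, studio lighting", num_inference_steps=10).images[0]
image.save("de_re_distilled.png")
```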

====================================

Acceleration Credits:

====================================

Attribution and Licensing Notice:

The FLUX.1-dev Model is licensed by Black Forest Labs, Inc. under the FLUX.1-dev Non-Commercial License. Copyright Black Forest Labs, Inc.

Our model weights are released under the FLUX.1-dev Non-Commercial License.

These merges combine the strengths of the models above, applying detail enhancement and acceleration techniques to create unique, efficient models built upon Freepik's Flux.1-Lite-8B-alpha and Flux-dev-de-distill. We hope this contributes positively to the generative AI community!
