Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ This repository contains a merge of two FLUX models, built upon the base model [
 
 1. **GGUF Quantized Models (Q8_0)**:
    - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/)
-   - flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/)
+   - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/)
    - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/)
 
 2. **SAFETensors Format (fp8_e4m3fn_fast)**:
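The Q8_0 entries above are GGUF-quantized UNet checkpoints. As a minimal sketch only, the snippet below shows one way to verify such a file after downloading it, using the `gguf` Python package (`pip install gguf`); the local filename is a placeholder, since the links above point only to the account page, and this tooling choice is an assumption rather than the repository's documented workflow.

```python
# Minimal sketch: inspect a downloaded Q8_0 GGUF checkpoint with the `gguf`
# Python package. The filename below is a placeholder -- substitute the
# actual file downloaded from one of the links above.
from gguf import GGUFReader

reader = GGUFReader("flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf")

# Print the metadata keys stored in the GGUF header.
for key in reader.fields:
    print("field:", key)

# List the first few tensors and their quantization types; for a Q8_0 file,
# the large weight tensors should report Q8_0.
for tensor in reader.tensors[:10]:
    print(tensor.name, tensor.tensor_type.name, tuple(tensor.shape))
```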