Update README.md
README.md CHANGED
@@ -18,6 +18,10 @@ Only the second layers of both MLPs in each MMDiT block of SD3.5 Large models ha
 
 ## Files:
 
+### Non-Linear Type:
+
+- [sd3.5_large_turbo-iq4_nl.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/legacy/sd3.5_large_turbo-iq4_nl.gguf): Same size as q4_k_4_0 and q4_0, runs faster than q4_k_4_0 (on Vulkan at least), and provides better image quality. Recommended.
+
 ### Mixed Types:
 
 - [sd3.5_large_turbo-q2_k_4_0.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/sd3.5_large_turbo-q2_k_4_0.gguf): Smallest quantization yet. Use this if you can't afford anything bigger.
@@ -28,8 +32,6 @@ Only the second layers of both MLPs in each MMDiT block of SD3.5 Large models ha
 
 ### Legacy types:
 
-- [sd3.5_large_turbo-iq4_nl.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/legacy/sd3.5_large_turbo-iq4_nl.gguf): Same size as q4_k_4_0 and q4_0, runs faster than q4_k_4_0 (on Vulkan at least), and provides better image quality. Recommended.
-
 - [sd3.5_large_turbo-q4_0.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/legacy/sd3.5_large_turbo-q4_0.gguf): Same size as q4_k_4_0. Not recommended (use iq4_nl or q4_k_4_0 instead).
 - [sd3.5_large_turbo-q4_1.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/legacy/sd3.5_large_turbo-q4_1.gguf): Not recommended (q4_k_4_1 is better and smaller).
 - [sd3.5_large_turbo-q5_0.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/legacy/sd3.5_large_turbo-q5_0.gguf): Barely better, and bigger, than q4_k_5_0.
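For reference, these quantized files are meant to be loaded with [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp). Below is a minimal usage sketch, assuming a local build of its `sd` CLI and separately downloaded text encoders; the encoder filenames, output path, and the turbo sampling settings (few steps, cfg-scale 0) are assumptions here, not values taken from this README:

```sh
# Sketch: generate an image from the iq4_nl quant with stable-diffusion.cpp.
# All paths are placeholders. If your sd.cpp build treats these GGUFs as
# diffusion-model-only files, load them via --diffusion-model instead of -m.
./sd -m models/sd3.5_large_turbo-iq4_nl.gguf \
  --clip_l models/clip_l.safetensors \
  --clip_g models/clip_g.safetensors \
  --t5xxl models/t5xxl_fp16.safetensors \
  -p "a photo of a red fox in the snow" \
  -H 1024 -W 1024 \
  --sampling-method euler \
  --steps 4 --cfg-scale 0 \
  -o output.png
```

Turbo checkpoints are distilled for few-step sampling, hence the low step count and disabled CFG in this sketch.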