Update README.md
README.md
CHANGED
@@ -20,12 +20,11 @@ Using llama.cpp fork: [https://github.com/fairydreaming/llama.cpp/tree/deepseek-
 Merged GGUF should appear
 
 # Quants:
-- bf16 (
-- f16 (after q2_k, but just use bf16) [estimated size: ~400gb]
+- bf16 (finished, currently splitting and uploading) [size: 439gb]
 - f32 (may require some time to upload, after q8_0) [estimated size: ~800gb]
 - q8_0 (after bf16) [estimated size: 233.27gb]
-- q4_k_m (after q8_0) [estimated size: 133.10gb]
-- q2_k (after q4_k_m) [estimated size: ~65gb]
-- q3_k_s (low priority) [estimated size: 96.05gb]
+- ~~q4_k_m (after q8_0) [estimated size: 133.10gb]~~
+- ~~q2_k (after q4_k_m) [estimated size: ~65gb]~~
+- ~~q3_k_s (low priority) [estimated size: 96.05gb]~~
 
-If quantize.exe supports it I will make RTN quants.
+If quantize.exe supports it I will make RTN quants (edit: it doesn't).
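The sizes quoted in the list are consistent with scaling the finished bf16 file (439 GB at 16 bits/weight) by each quant type's average bits per weight. A minimal sketch of that arithmetic — the bits-per-weight values are ballpark llama.cpp figures assumed here, not numbers from this repo:

```python
# Rough GGUF size estimates derived from the bf16 baseline (439 GB @ 16 bpw).
# Bits-per-weight values are approximate averages for llama.cpp quant types.
BF16_GB = 439
BPW = {"q8_0": 8.5, "q4_k_m": 4.85, "q3_k_s": 3.44, "q2_k": 2.56}

def est_size_gb(quant: str) -> float:
    """Scale the bf16 file size by the quant's bits-per-weight ratio."""
    return BF16_GB * BPW[quant] / 16

for q in BPW:
    print(f"{q}: ~{est_size_gb(q):.2f} GB")
```

This reproduces the listed figures closely (q8_0 → ~233.2 GB vs. 233.27 GB; q4_k_m → ~133.1 GB), which suggests the README's estimates were computed the same way.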