Update README.md
README.md
CHANGED
@@ -88,7 +88,7 @@ Refer to the Provided Files table below to see what files use which methods, and
| llama-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.20 GB | 48.70 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.89 GB | 47.39 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
- | llama-65b.ggmlv3.q6_K.bin
+ | llama-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB | 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| llama-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.370 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
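For the GPU-offloading note above, a minimal sketch of how layers can be offloaded with llama.cpp's `main` example. The `-ngl` (`--n-gpu-layers`) flag is llama.cpp's standard offloading option, but the layer count, model choice, and prompt shown here are illustrative assumptions and are not part of this README's diff:

```bash
# Hypothetical invocation (assumes a GPU-enabled llama.cpp build):
# -m    selects one of the quantized files from the table above
# -ngl  offloads that many layers to the GPU, moving their weights from system RAM to VRAM
# -c    sets the context size; -p supplies the prompt
./main -m llama-65b.ggmlv3.q5_K_M.bin -ngl 40 -c 2048 -p "Write a haiku about quantization."
```

The fewer layers offloaded, the more of the RAM figures listed above remains in system memory.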