apepkuss79 committed

Commit: 1b1045f
1 Parent(s): ae2cb03

Update README.md

README.md CHANGED
@@ -65,7 +65,7 @@ tags:
   --ctx-size 128000
 ```

-## Quantized GGUF Models
+<!-- ## Quantized GGUF Models

 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ----- |
@@ -77,11 +77,11 @@ tags:
 | [Qwen2.5-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 8.99 GB| medium, balanced quality - recommended |
 | [Qwen2.5-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 8.57 GB| small, greater quality loss |
 | [Qwen2.5-32B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 10.3 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
-| [Qwen2.5-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 |
+| [Qwen2.5-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 23.3 GB| large, very low quality loss - recommended |
 | [Qwen2.5-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 10.3 GB| large, low quality loss - recommended |
 | [Qwen2.5-32B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q6_K.gguf) | Q6_K | 6 | 12.1 GB| very large, extremely low quality loss |
 | [Qwen2.5-32B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 15.1 GB| very large, extremely low quality loss - not recommended |
-| [Qwen2.5-32B-Instruct-f16.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-f16.gguf) | f16 | 16 | 29.5 GB| |
+| [Qwen2.5-32B-Instruct-f16.gguf](https://huggingface.co/second-state/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-f16.gguf) | f16 | 16 | 29.5 GB| | -->

 *Quantized with llama.cpp b3751*

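The diff's context lines include the README's `--ctx-size 128000` flag, but the full run command is not part of these hunks. As a hedged sketch only (not the repo's documented instructions), one way to try a quantized file from the table above is with llama.cpp, which the README notes was used for quantization (b3751); the chosen quant file, download directory, and prompt below are illustrative assumptions.

```bash
# Hypothetical example, not taken from the diff: fetch one quant listed in the
# table and run it with llama.cpp's CLI, reusing the README's context size.
huggingface-cli download second-state/Qwen2.5-32B-Instruct-GGUF \
  Qwen2.5-32B-Instruct-Q4_K_M.gguf --local-dir .

# llama-cli is llama.cpp's chat/completion binary; --ctx-size matches the
# 128000-token value shown in the diff's context lines.
./llama-cli -m Qwen2.5-32B-Instruct-Q4_K_M.gguf \
  --ctx-size 128000 \
  -p "Summarize the difference between Q4_K_M and Q5_K_M quantization."
```

Larger quants (Q5_K_M and up) trade more memory for lower quality loss, so the same command applies to any file in the table by swapping the filename.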