Kquant03 committed on
Commit 9127a81
Parent: cc7e677

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -27,12 +27,12 @@ A Convex frankenMoE. Created via improving the original Seraphim script. The mod
 
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-| [Q2_K Tiny](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 23.4 GB| 25.4 GB | smallest, significant quality loss - not recommended for most purposes |
-| [Q3_K_M](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 30.5 GB| 32.5 GB | very small, high quality loss |
-| [Q4_0](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 39.6 GB| 41.6 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [Q4_K_M](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | ~39.6 GB| ~41.6 GB | medium, balanced quality - recommended |
-| [Q5_0](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 48.2 GB| 50.2 GB | legacy; large, balanced quality |
-| [Q5_K_M](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~48.2 GB| ~50.2 GB | large, balanced quality - recommended |
+| [Q2_K Tiny](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 23.4 GB| 25.4 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Q3_K_M](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 30.5 GB| 32.5 GB | very small, high quality loss |
+| [Q4_0](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 39.6 GB| 41.6 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Q4_K_M](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | ~39.6 GB| ~41.6 GB | medium, balanced quality - recommended |
+| [Q5_0](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 48.2 GB| 50.2 GB | legacy; large, balanced quality |
+| [Q5_K_M](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~48.2 GB| ~50.2 GB | large, balanced quality - recommended |
 # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
 ### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
 
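
For readers of the updated README, a minimal usage sketch for the quant table above: it is illustrative only and not part of this commit. It assumes the `huggingface_hub` and `llama-cpp-python` packages (neither is mentioned in the README) and picks the Q4_K_M file marked "recommended" in the table.

```python
# Illustrative sketch only -- not part of this commit. Assumes the
# `huggingface_hub` and `llama-cpp-python` packages are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant, the "recommended" balanced option in the table.
model_path = hf_hub_download(
    repo_id="Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF",
    filename="ggml-model-q4_k_m.gguf",
)

# Pure-CPU load: in every table row, "Max RAM required" is the file size
# plus ~2 GB, so budget roughly 41.6 GB of RAM for this file without GPU offload.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Q: What is a Mixture of Experts model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```

Note the pattern in the table: each "Max RAM required" figure is the GGUF file size plus about 2 GB of working overhead; offloading layers to a GPU lowers the system-RAM requirement accordingly.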