bartowski committed
Commit 802fa88
Parent: a0cbdba

Update README.md

Files changed (1)
1. README.md +1 -1
README.md CHANGED
@@ -44,10 +44,10 @@ Run them in [LM Studio](https://lmstudio.ai/)
  | [gemma-2-9b-it-abliterated-Q4_K_L.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_K_L.gguf) | Q4_K_L | 5.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
  | [gemma-2-9b-it-abliterated-Q4_K_M.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_K_M.gguf) | Q4_K_M | 5.76GB | false | Good quality, default size for most use cases, *recommended*. |
  | [gemma-2-9b-it-abliterated-Q4_K_S.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_K_S.gguf) | Q4_K_S | 5.48GB | false | Slightly lower quality with more space savings, *recommended*. |
+ | [gemma-2-9b-it-abliterated-Q4_0.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_0.gguf) | Q4_0 | 5.46GB | false | Legacy format, offers online repacking for ARM and AVX inference. |
  | [gemma-2-9b-it-abliterated-Q4_0_8_8.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_0_8_8.gguf) | Q4_0_8_8 | 5.44GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
  | [gemma-2-9b-it-abliterated-Q4_0_4_8.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_0_4_8.gguf) | Q4_0_4_8 | 5.44GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
  | [gemma-2-9b-it-abliterated-Q4_0_4_4.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_0_4_4.gguf) | Q4_0_4_4 | 5.44GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
- | [gemma-2-9b-it-abliterated-Q4_0.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_0.gguf) | Q4_0 | 5.44GB | false | Legacy format, offers online repacking for ARM and AVX inference. |
  | [gemma-2-9b-it-abliterated-Q3_K_XL.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q3_K_XL.gguf) | Q3_K_XL | 5.35GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
  | [gemma-2-9b-it-abliterated-IQ4_XS.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-IQ4_XS.gguf) | IQ4_XS | 5.18GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
  | [gemma-2-9b-it-abliterated-Q3_K_L.gguf](https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q3_K_L.gguf) | Q3_K_L | 5.13GB | false | Lower quality but usable, good for low RAM availability. |
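The Q4_0_X_X rows in the table differ only in which CPU features they need: Q4_0_8_8 wants 'sve', Q4_0_4_8 wants 'i8mm', and Q4_0_4_4 should run on any ARM chip. As a minimal sketch of how you might check for those flags before picking a file, assuming a Linux ARM machine where `/proc/cpuinfo` exists:

```python
# Minimal Linux-only sketch: report whether the CPU feature flags named in the
# table above are present. /proc/cpuinfo lists per-core "Features" on ARM Linux;
# this file does not exist on macOS (there, `sysctl hw.optional` is the rough
# equivalent). Plain substring matching is good enough for these flag names.
with open("/proc/cpuinfo") as f:
    flags = f.read()

for feature in ("sve", "i8mm"):  # sve -> Q4_0_8_8, i8mm -> Q4_0_4_8
    status = "present" if feature in flags else "absent"
    print(f"{feature}: {status}")
```

If neither flag shows up, the table's own advice applies: take Q4_0_4_4.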
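For fetching one of these files outside LM Studio, here is a minimal download sketch using the `huggingface_hub` Python package (not part of this commit; the repo id and filename are taken verbatim from the table, and Q4_K_M is chosen only because the table marks it as the recommended default):

```python
# Minimal sketch: download a single quant from the repo listed in the table.
# Assumes `pip install huggingface_hub` has been run.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/gemma-2-9b-it-abliterated-GGUF",
    filename="gemma-2-9b-it-abliterated-Q4_K_M.gguf",
)
print(f"Downloaded to: {path}")
```

Swapping in any other filename from the table (for example the Q4_0 file this commit resizes to 5.46GB) works the same way.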