Update README.md
README.md
CHANGED
@@ -52,6 +52,7 @@ Run them in [LM Studio](https://lmstudio.ai/)
  | Filename | Quant type | File Size | Split | Description |
  | -------- | ---------- | --------- | ----- | ----------- |
  | [INTELLECT-1-Instruct-f32.gguf](https://huggingface.co/bartowski/INTELLECT-1-Instruct-GGUF/blob/main/INTELLECT-1-Instruct-f32.gguf) | f32 | 40.85GB | false | Full F32 weights. |
+ | [INTELLECT-1-Instruct-f16.gguf](https://huggingface.co/bartowski/INTELLECT-1-Instruct-GGUF/blob/main/INTELLECT-1-Instruct-f16.gguf) | f16 | 20.40GB | false | Full F16 weights. |
  | [INTELLECT-1-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/INTELLECT-1-Instruct-GGUF/blob/main/INTELLECT-1-Instruct-Q8_0.gguf) | Q8_0 | 10.86GB | false | Extremely high quality, generally unneeded but max available quant. |
  | [INTELLECT-1-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/INTELLECT-1-Instruct-GGUF/blob/main/INTELLECT-1-Instruct-Q6_K_L.gguf) | Q6_K_L | 8.64GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
  | [INTELLECT-1-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/INTELLECT-1-Instruct-GGUF/blob/main/INTELLECT-1-Instruct-Q6_K.gguf) | Q6_K | 8.39GB | false | Very high quality, near perfect, *recommended*. |