added info and links to pre-computed matrix files
README.md CHANGED
@@ -117,9 +117,19 @@ cd <llama.cpp directory>
 # --chunks 100 (recommended)
 # --mlock : keep model in RAM (only use if you have sufficient RAM for the whole fp16)
 ```
-4. Use the generated matrix file to quantise the model
+4. Use the generated matrix file to quantise the model (see further down for some pre-computed matrix files)
 ```
 ./quantize --matrix <output.matrix> <model_path>/ggml-model-f16.gguf <quantisation_level, eg:IQ4_XS>
 ```
 Note: normal quantisation also benefits from using a matrix file. It also seems that a bigger input matrix is
 better for higher quantisation.
+
+### Pre-computed matrix files
+
+Since generating a matrix file takes time and requires significant processing power and memory,
+some kind folks have made pre-computed matrix files available. You can use them directly in the quantize step.
+However, remember they can only be used for the specific model mentioned, and no other.
+
+[Joseph717171/Imatrices](https://huggingface.co/Joseph717171/Imatrices): a growing list of matrix files for 7B to 17B models
+
+[ikawrakow/imatrix-from-wiki-train](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train): matrix files for base models (llama, mistral, nous-hermes, qwen) computed on `wiki.train.raw`
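
To make step 4 of the hunk above concrete, here is a minimal sketch of quantising one model to several levels while reusing a single matrix file; the model and matrix paths are placeholders, and it assumes the `--matrix` form of `quantize` shown in the diff.

```
# Sketch only: paths are placeholders, substitute your own.
MODEL=<model_path>/ggml-model-f16.gguf
MATRIX=<output.matrix>

# The matrix file is tied to the model, not to a quantisation level,
# so one imatrix run covers every level you want to produce.
for LEVEL in IQ3_XXS IQ4_XS Q4_K_M; do
    ./quantize --matrix "$MATRIX" "$MODEL" "$LEVEL"
done
```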
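
And a minimal sketch of pulling one of the pre-computed matrix files and feeding it to `quantize`; the matrix file name is a placeholder, so check the repository listing for the file matching your exact model.

```
# Download a pre-computed matrix file (file name is a placeholder;
# browse the repo for the one matching your model).
huggingface-cli download Joseph717171/Imatrices <matrix-file-name> --local-dir .

# The ikawrakow files live in a dataset repo, so add --repo-type dataset:
# huggingface-cli download --repo-type dataset ikawrakow/imatrix-from-wiki-train <matrix-file-name> --local-dir .

# Then quantise exactly as with a self-generated matrix:
./quantize --matrix <matrix-file-name> <model_path>/ggml-model-f16.gguf IQ4_XS
```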