Joseph717171
committed on
Update README.md
README.md CHANGED
@@ -9,6 +9,8 @@ author: froggeric (https://huggingface.co/datasets/froggeric/imatrix/edit/main/R
 # Note: All uploaded imatrices to this repo are pre-computed, and are, therefore, ready to be used in llama.cpp's quantization process.
 
 # Note: Imatrices uploaded to this repo follow the following naming convention: model-name_training-dataset.imatrix (hyphens are purely used in this example to enhance readability...)
+
+# Just download the imatrix for your chosen LLM (Large Language Model), and quantize to your preferred QuantType. (Note: the following example assumes you have already converted your LLM to GGUF.)
 ```
 llama.cpp % ./quantize --imatrix path_to_imatrix path_to_model/ggml-model-f16.gguf model_name-QuantType.gguf QuantType
 ```
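For illustration, a concrete invocation of the command from the diff might look like the sketch below, with the imatrix file named per the model-name_training-dataset.imatrix convention. The model name, training-dataset name, and quant type here are hypothetical placeholders, not files provided by this repo.

```
# Hypothetical example: quantize a GGUF model using a pre-computed imatrix.
# "Mistral-7B-Instruct_groups_merged.imatrix" and "IQ4_XS" are placeholder names
# chosen for illustration only.
llama.cpp % ./quantize --imatrix Mistral-7B-Instruct_groups_merged.imatrix \
    path_to_model/ggml-model-f16.gguf Mistral-7B-Instruct-IQ4_XS.gguf IQ4_XS
```

The imatrix file name encodes both the model and the dataset it was computed on, so picking the right file for your model is just a matter of matching the prefix before the underscore.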