FantasiaFoundry committed
Commit 907e1a6 • 1 Parent(s): 6427dd6

Update README.md

README.md CHANGED
@@ -15,7 +15,7 @@ tags:
 
 > [!WARNING]
 > **Warning:** <br>
-> For **Llama-3** models, at the moment, you have to use `gguf-imat-llama-3.py` and replace the config files with the ones in the [**ChaoticNeutrals/Llama3-Corrections**](https://huggingface.co/ChaoticNeutrals/Llama3-Corrections/tree/main) repository to properly quant and generate the imatrix data.
+> For **Llama-3** models that don't follow the ChatML, Alpaca, Vicuna and other conventional formats, at the moment, you have to use `gguf-imat-llama-3.py` and replace the config files with the ones in the [**ChaoticNeutrals/Llama3-Corrections**](https://huggingface.co/ChaoticNeutrals/Llama3-Corrections/tree/main) repository to properly quant and generate the imatrix data.
 
 Pull Requests with your own features and improvements to this script are always welcome.
 
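For reference, below is a minimal sketch of the "replace the config files" step described in the changed line. It is not part of the script itself: it assumes `huggingface_hub` is installed, and the local model path (`models/my-llama3-model`) is a hypothetical example, since the actual folder layout depends on how the model was downloaded.

```python
# Sketch: fetch the correction files from ChaoticNeutrals/Llama3-Corrections
# and copy them over the downloaded model's originals, then run the script.
import shutil
from pathlib import Path

from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download the correction repo to a local folder.
corrections_dir = Path(
    snapshot_download(
        repo_id="ChaoticNeutrals/Llama3-Corrections",
        local_dir="Llama3-Corrections",
    )
)

# Hypothetical path to the already-downloaded Llama-3 model folder.
model_dir = Path("models/my-llama3-model")

# Overwrite the model's config files with the corrected versions,
# skipping repo metadata files.
for src in corrections_dir.iterdir():
    if src.is_file() and src.name not in {".gitattributes", "README.md"}:
        shutil.copy2(src, model_dir / src.name)

# Afterwards, run the quantization / imatrix script as usual, e.g.:
#   python gguf-imat-llama-3.py
```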