## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `WizardLM-7B.GGML.q4_0.bin` | q4_0 | 4bit | 4.0GB | 6GB | Maximum compatibility |
| `WizardLM-7B.GGML.q4_2.bin` | q4_2 | 4bit | 4.0GB | 6GB | Best compromise between resources, speed and quality |
| `WizardLM-7B.GGML.q4_3.bin` | q4_3 | 4bit | 4.8GB | 7GB | Maximum quality, high RAM requirements and slow inference |

* The q4_0 file provides lower quality but maximal compatibility. It will work with past and future versions of llama.cpp.
* The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
* The q4_3 file offers the highest quality, at the cost of increased RAM usage and slower inference speed. This format is still subject to change and there may be compatibility issues; see below.
## q4_2 and q4_3 compatibility

q4_2 and q4_3 are new 4bit quantisation methods offering improved quality. However, they are still under development and their formats are subject to change.

To use these files you will need recent llama.cpp code, and it is possible that future updates to llama.cpp will require these files to be re-generated.

If and when the q4_2 and q4_3 files no longer work with recent versions of llama.cpp, I will endeavour to update them.

If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.
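As a rough sketch, using one of these files means building recent llama.cpp from source and pointing its `main` binary at the model. The paths and prompt below are placeholders, and exact flags may vary between llama.cpp versions:

```shell
# Build llama.cpp from recent source (required for q4_2/q4_3 support)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run inference with the q4_2 file:
#   -m selects the model file, -n limits generated tokens, -p supplies the prompt
./main -m ./models/WizardLM-7B.GGML.q4_2.bin -n 128 -p "What is a llama?"
```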
# Original model info