TheBloke committed on
Commit
393d6b2
1 Parent(s): de86b3d

Update README.md

Files changed (1):
  1. README.md +13 -26
README.md CHANGED
@@ -26,41 +26,28 @@ This model requires the following prompt template:
 <|assistant|>:
 ```

- ## Provided files
- | Name | Quant method | Bits | Size | RAM required | Use case |
- | ---- | ---- | ---- | ---- | ---- | ----- |
- `OpenAssistant-30B-epoch7.ggml.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | Maximum compatibility |
- `OpenAssistant-30B-epoch7.ggml.q4_2.bin` | q4_2 | 4bit | 20.3GB | 23GB | Best compromise between resources, speed and quality |
- `OpenAssistant-30B-epoch7.ggml.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | Brand-new 5-bit method. Potentially higher quality than 4-bit, at the cost of slightly higher resource usage. |
- `OpenAssistant-30B-epoch7.ggml.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | Brand-new 5-bit method. Slightly higher resource usage than q5_0. |
-
- * The q4_0 file provides lower quality but maximal compatibility. It will work with past and future versions of llama.cpp.
- * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
- * The q5_0 file uses the brand-new 5-bit method released 26th April. It is the 5-bit equivalent of q4_0.
- * The q5_1 file uses the brand-new 5-bit method released 26th April. It is the 5-bit equivalent of q4_1.
-
- ## q4_2 compatibility

- q4_2 is a relatively new 4-bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.

- To use these files you will need recent llama.cpp code, and it is possible that future updates to llama.cpp will require them to be re-generated.

- If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.

- If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.
-
- ## q5_0 and q5_1 compatibility
-
- These new methods were added to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.

- Don't expect any third-party UIs/tools to support them yet.

 ## How to run in `llama.cpp`

 I use the following command line; adjust for your tastes and needs:

 ```
- ./main -t 18 -m OpenAssistant-30B-epoch7.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>Write a story about llamas <|assistant|>:"
 ```

 Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
@@ -71,9 +58,9 @@ GGML models can be loaded into text-generation-webui by installing the llama.cpp

 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

- Note: at this time text-generation-webui will not support the new q5 quantisation methods.

- **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.

 # Original model card
 
+ ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!

+ llama.cpp recently made a breaking change to its quantisation methods.

+ I have re-quantised the GGML files in this repo. You will therefore need llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
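
For anyone building from source, here is a minimal sketch of getting a new-enough build (it assumes a Unix-like system with `git`, `make` and a C++ toolchain; the commit hash is the one named in the heading above):

```
# Clone llama.cpp and build it. Any commit at or after b9fd7ee
# (May 12th 2023) includes the updated quantisation formats.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b9fd7ee   # optional: pin the exact commit named above
make
```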

+ The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
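
To keep using those older files, one option is a branch-specific clone. A sketch only: the URL is a placeholder, since this page does not spell out the repo's full path, and `git-lfs` is assumed for the large `.bin` files:

```
# Placeholder URL: substitute this repo's actual Hugging Face path.
git lfs install
git clone --branch previous_llama https://huggingface.co/TheBloke/<this-repo>
```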

+ ## Provided files
+ | Name | Quant method | Bits | Size | RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ `OpenAssistant-30B-epoch7.ggml.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | 4-bit; smallest file, lowest resource usage and fastest inference. |
+ `OpenAssistant-30B-epoch7.ggml.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ `OpenAssistant-30B-epoch7.ggml.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |

 ## How to run in `llama.cpp`

 I use the following command line; adjust for your tastes and needs:

 ```
+ ./main -t 18 -m OpenAssistant-30B-epoch7.ggml.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>Write a story about llamas <|assistant|>:"
 ```

 Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
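
If you are unsure how many physical cores you have, one way to check on Linux is sketched below; it counts unique core/socket pairs so that SMT ("hyperthread") siblings are not double-counted:

```
# Each unique Core,Socket pair reported by lscpu is one physical core.
lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l
```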
 

 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

+ Note: at this time text-generation-webui will likely not support the newly updated llama.cpp quantisation methods.

+ **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that you can likely get support for the new quantisation methods sooner.
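
Once the webui's llama.cpp module is up to date, loading the file looks roughly like this. A sketch only: the `models/` directory and `--model` flag are assumed from text-generation-webui's usual conventions, not stated on this page:

```
# Put the GGML file where text-generation-webui looks for models,
# then start the server pointing at it.
cp OpenAssistant-30B-epoch7.ggml.q4_0.bin text-generation-webui/models/
cd text-generation-webui
python server.py --model OpenAssistant-30B-epoch7.ggml.q4_0.bin
```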

 # Original model card