TheBloke committed
Commit addb691
1 Parent(s): 54e0c6d

New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48

Files changed (1)
  1. README.md +11 -10
README.md CHANGED
@@ -18,32 +18,33 @@ It was created by merging the LoRA provided in the above repo with the original
 
  The files in this repo were then quantized to 4bit and 5bit for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
- ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+ ## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
- llama.cpp recently made a breaking change to its quantisation methods.
+ llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508
 
- I have re-quantised the GGML files in this repo. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
+ I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
 
- The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
+ For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
 
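If you need to rebuild llama.cpp for this, a minimal sketch (assuming the standard `make` build on Linux/macOS; `2d5db48` is the commit named above):

```
# clone llama.cpp and build it at, or after, the required commit
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # or any later commit
make
```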
  ## Provided files
  | Name | Quant method | Bits | Size | RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | `gpt4-alpaca-lora-30B.ggml.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | 4bit. |
- | `gpt4-alpaca-lora-30B.ggml.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5bit. Higher accuracy, higher resource usage, slower inference. |
- | `gpt4-alpaca-lora-30B.ggml.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | 5bit. Even higher accuracy and resource usage, and slower inference. |
+ | `gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | 4-bit. |
+ | `gpt4-alpaca-lora-30B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 22.4GB | 25GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+ | `gpt4-alpaca-lora-30B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage, slower inference. |
+ | `gpt4-alpaca-lora-30B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
 
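To fetch a single quantised file rather than every variant, one option is Git LFS (the repo URL below is assumed from this model's name; substitute the actual repo):

```
# skip LFS payloads on clone, then pull only the file you want
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/TheBloke/gpt4-alpaca-lora-30B-GGML
cd gpt4-alpaca-lora-30B-GGML
git lfs pull --include "gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin"
```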
 
  ## How to run in `llama.cpp`
 
  I use the following command line; adjust for your tastes and needs:
 
  ```
- ./main -t 18 -m gpt4-alpaca-lora-30B.GGML.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
+ ./main -t 18 -m gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
  ### Instruction:
  Write a story about llamas
  ### Response:"
  ```
- Change `-t 18` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
+ Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 6 cores/12 threads, use `-t 6`.
 
  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
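For example, a chat-mode invocation built from the same flags as above (a sketch; adjust the model file and `-t` for your system):

```
./main -t 6 -m gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```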
 
@@ -53,7 +54,7 @@ Create a model directory that has `ggml` (case sensitive) in its name. Then put
 
  Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
- Note that as of May 12th, text-gen-ui likely won't support the newly updated GGML models until it's been updated.
+ Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
 
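As an illustration of the directory rule in the hunk header above (the `models/` location and directory name are assumptions; only the `ggml` substring in the name is required):

```
# text-generation-webui looks for "ggml" (case sensitive) in the directory name
mkdir -p models/gpt4-alpaca-lora-30B-ggml
mv gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin models/gpt4-alpaca-lora-30B-ggml/
```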
  # Original GPT4 Alpaca Lora model card