New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48
README.md CHANGED
````diff
@@ -18,32 +18,33 @@ It was created by merging the LoRA provided in the above repo with the original
 
 The files in this repo were then quantized to 4bit and 5bit for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
-## REQUIRES LATEST LLAMA.CPP (May …
+## THE FILES IN MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
-llama.cpp recently made …
+llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
 
-I have …
+I have quantised the GGML files in this repo with the latest version. You will therefore need llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
 
-…
+For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
 
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-| `gpt4-alpaca-lora-30B.…
-| `gpt4-alpaca-lora-30B.…
-| `gpt4-alpaca-lora-30B.…
+| `gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin` | q4_0 | 4-bit | 20.3GB | 23GB | 4-bit. |
+| `gpt4-alpaca-lora-30B.ggmlv3.q4_1.bin` | q4_1 | 4-bit | 22.4GB | 25GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0; faster inference than the q5 models. |
+| `gpt4-alpaca-lora-30B.ggmlv3.q5_0.bin` | q5_0 | 5-bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage, slower inference. |
+| `gpt4-alpaca-lora-30B.ggmlv3.q5_1.bin` | q5_1 | 5-bit | 24.4GB | 27GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
 
 ## How to run in `llama.cpp`
 
 I use the following command line; adjust for your tastes and needs:
 
 ```
-./main -t 18 -m gpt4-alpaca-lora-30B.…
+./main -t 18 -m gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
 ### Instruction:
 Write a story about llamas
 ### Response:"
 ```
-Change `-t 18` to the number of physical CPU cores you have. For example if your system has …
+Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 6 cores/12 threads, use `-t 6`.
 
 If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
 
@@ -53,7 +54,7 @@ Create a model directory that has `ggml` (case sensitive) in its name. Then put
 
 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
-Note …
+Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
 
 # Original GPT4 Alpaca Lora model card
````
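The updated README pins the requirement to llama.cpp commit `2d5db48` (May 19th 2023) or later. A minimal build sketch, assuming a Unix-like system with `git` and `make` (the standard llama.cpp build method at the time):

```
# Fetch llama.cpp and build it at (or after) the required commit.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # the May 19th 2023 quantisation change; any later commit also works
make                   # produces the ./main binary used in the README's example command
```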
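For the older GGMLv2 files, the `previous_llama_ggmlv2` branch can be cloned directly. A sketch assuming `git-lfs` is installed; the repository URL here is an assumption inferred from the file names, so substitute the actual model repo:

```
# Repo URL is hypothetical -- replace it with the real model repository.
git lfs install
git clone --branch previous_llama_ggmlv2 https://huggingface.co/TheBloke/gpt4-alpaca-lora-30B-GGML
```

Note that a full clone downloads every quantisation on the branch; fetching a single file through the web interface avoids downloading files you don't need.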
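Because `-t` should match physical cores rather than logical threads, it can help to check the count first. A sketch for Linux, with the macOS equivalent noted:

```
# Linux: count unique (core, socket) pairs, i.e. physical cores.
lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l
# macOS:
sysctl -n hw.physicalcpu
```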
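For a chat-style conversation, the README says to replace the `-p "<PROMPT>"` argument with `-i -ins`; applied to the example command, with all other flags unchanged, that gives:

```
./main -t 18 -m gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```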
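For text-generation-webui, the second hunk's instruction is to create a model directory with `ggml` (case sensitive) in its name and put the file there. A minimal sketch, assuming the webui's usual `models/` layout:

```
# Per the README, the directory name must contain "ggml" (case sensitive).
mkdir -p text-generation-webui/models/gpt4-alpaca-lora-30B-ggml
mv gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin text-generation-webui/models/gpt4-alpaca-lora-30B-ggml/
```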