New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48
README.md
CHANGED
@@ -31,20 +31,20 @@ I have the following Koala model repositories available:
 
 * [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
 * [4-bit, 5-bit and 8-bit GGML models for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GGML)
 
-## REQUIRES LATEST LLAMA.CPP (May
+## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
-llama.cpp recently made
+llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
 
-I have
+I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
 
-
+For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
 
 ## How to run in `llama.cpp`
 
 I use the following command line; adjust for your tastes and needs:
 
 ```
-./main -t 18 -m koala-7B-4bit-128g.
+./main -t 18 -m koala-7B-4bit-128g.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "BEGINNING OF CONVERSATION:
 USER: <PROMPT GOES HERE>
 GPT:"
 ```
@@ -98,4 +98,4 @@ The model weights are intended for academic research only, subject to the
 [model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md),
 [Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use),
 and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
-Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.
+Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.