Update README.md
README.md

```diff
@@ -6,8 +6,9 @@ This repo contains the weights of the Koala 7B model produced at Berkeley. It is
 
 This version has then been quantized to 4bit using https://github.com/qwopqwop200/GPTQ-for-LLaMa
 
-
-
+These other versions are also available:
+* [Unquantized model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
+* [Unquantized model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized)
 
 ### WARNING: At the present time the GPTQ files uploaded here seem to be producing garbage output. It is not recommended to use them.
 
```