Update README.md
README.md CHANGED
@@ -8,6 +8,7 @@ This version has then been quantized to 4-bit using [GPTQ-for-LLaMa](https://git
 
 ## My Koala repos
 I have the following Koala model repositories available:
+
 **13B models:**
 * [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF)
 * [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g)
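
For reference, a minimal sketch of fetching the 4-bit GPTQ weights from the linked repo with `huggingface_hub`. The exact weights filename is an assumption and should be checked against the repo's file listing before use.

```python
# Minimal sketch: download the 4-bit GPTQ weights from the linked repo.
# Assumes the `huggingface_hub` package is installed; the filename below is
# an assumption and should be verified against the repo's "Files" tab.
from huggingface_hub import hf_hub_download

weights_path = hf_hub_download(
    repo_id="TheBloke/koala-13B-GPTQ-4bit-128g",
    filename="koala-13B-4bit-128g.safetensors",  # assumed name; check the repo
)
print(f"Downloaded quantized weights to {weights_path}")
```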