Update README.md
README.md CHANGED

@@ -10,11 +10,11 @@ tags:
 - gpt4
 inference: false
 ---
-# GPT4 Alpaca
+# GPT4 Alpaca LoRA 30B - GPTQ 4bit 128g
 
 This is a 4-bit GPTQ version of the [Chansung GPT4 Alpaca 30B LoRA model](https://huggingface.co/chansung/gpt4-alpaca-lora-30b).
 
-It was created by merging the
+It was created by merging the LoRA provided in the above repo with the original Llama 30B model, producing the unquantised model [GPT4-Alpaca-LoRA-30B-HF](https://huggingface.co/TheBloke/gpt4-alpaca-lora-30b-HF).
 
 It was then quantized to 4bit, groupsize 128g, using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
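For reference, the quantisation step named in the updated README can be sketched as a command line following the usage documented in the GPTQ-for-LLaMa repository. This is a sketch, not the exact command used for this model: the model directory and output filename below are placeholder assumptions, and flags such as `--true-sequential` or `--act-order` may or may not have been applied.

```shell
# Sketch of a GPTQ-for-LLaMa quantisation run (placeholder paths).
# c4 is the calibration dataset; --wbits 4 and --groupsize 128 match
# the "4bit, groupsize 128g" settings stated in the model card.
CUDA_VISIBLE_DEVICES=0 python llama.py ./gpt4-alpaca-lora-30b-HF c4 \
    --wbits 4 \
    --groupsize 128 \
    --save gpt4-alpaca-lora-30b-4bit-128g.pt
```

Running this requires the merged unquantised HF-format model on disk and a GPU with enough memory for the 30B weights.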