TheBloke committed on
Commit
27f8d3e
1 Parent(s): d24db15

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -14,7 +14,7 @@ inference: false
 
 This is a 4-bit GPTQ version of the [Chansung GPT4 Alpaca 30B LoRA model](https://huggingface.co/chansung/gpt4-alpaca-lora-30b).
 
-It was created by merging the LoRA provided in the above repo with the original Llama 30B model, producing unquantised model [GPT4-Alpaca-LoRA-30B-HF]](https://huggingface.co/TheBloke/gpt4-alpaca-lora-30b-HF)
+It was created by merging the LoRA provided in the above repo with the original Llama 30B model, producing unquantised model [GPT4-Alpaca-LoRA-30B-HF](https://huggingface.co/TheBloke/gpt4-alpaca-lora-30b-HF)
 
 It was then quantized to 4bit, groupsize 128g, using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
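For context, the "merging the LoRA with the original Llama 30B model" step mentioned in the README is a standard LoRA merge. The sketch below shows one common way to do it with the PEFT library; it is an assumption about tooling, not the exact commands used to produce this repo, and the local paths and output directory name are placeholders.

```python
# Minimal sketch of merging a LoRA adapter into base Llama 30B weights
# prior to GPTQ quantisation. Assumes the `transformers` and `peft`
# libraries and a local copy of the original Llama 30B weights; paths
# are illustrative, not the author's actual ones.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "path/to/llama-30b",          # original Llama 30B weights (assumed local path)
    torch_dtype=torch.float16,
)
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-30b")

# Apply the Chansung GPT4 Alpaca LoRA and fold its weights into the base model
model = PeftModel.from_pretrained(base, "chansung/gpt4-alpaca-lora-30b")
merged = model.merge_and_unload()

# Save the merged, unquantised model; GPTQ-for-LLaMa can then quantise it
# to 4-bit with groupsize 128, as described in the README.
merged.save_pretrained("gpt4-alpaca-lora-30b-HF")
tokenizer.save_pretrained("gpt4-alpaca-lora-30b-HF")
```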