TheBloke committed on
Commit 143721e
Parent: 6c7a9b3

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -16,7 +16,7 @@ This is a 4-bit GGML version of the [Chansung GPT4 Alpaca 30B LoRA model](https:
 
 It was created by merging the LoRA provided in the above repo with the original Llama 30B model, producing unquantised model [GPT4-Alpaca-LoRA-30B-HF](https://huggingface.co/TheBloke/gpt4-alpaca-lora-30b-HF)
 
-The files in this repo were then quantized to 4bit for use with [llama.cpp](https://github.com/ggerganov/llama.cpp) using the new 4bit quantisation methods being worked on in [PR #896](https://github.com/ggerganov/llama.cpp/pull/896).
+The files in this repo were then quantized to 4bit and 5bit for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
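
For reference, a quantised GGML file like those listed in the "Provided files" table can be run with llama.cpp's `main` binary. This is a minimal sketch: the model filename and the Alpaca-style prompt below are assumed examples rather than values taken from this repo, so substitute the actual file name and the prompt template given in the README.

```bash
# Minimal sketch: run one of the 4-bit GGML files from this repo with llama.cpp.
# The filename is an assumed example; use a file from the "Provided files" table.
# Flags: -m model path, -t CPU threads, -n tokens to generate, -p prompt.
./main -m ./gpt4-alpaca-lora-30B.ggml.q4_0.bin -t 8 -n 256 \
  -p $'Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a haiku about llamas.\n\n### Response:\n'
```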