tsumeone committed
Commit 9fd74ab · 1 Parent(s): b8fa3e9

Create README.md
Files changed (1): README.md +8 -0
Quantized version of this: https://huggingface.co/TheBloke/stable-vicuna-13B-HF

A big thank-you to TheBloke for uploading the HF version above. Unfortunately, his GPTQ quant doesn't run on 0cc4m's fork of KoboldAI/GPTQ, so I am uploading one that does.

GPTQ quantization was done with https://github.com/0cc4m/GPTQ-for-LLaMa for compatibility with 0cc4m's fork of KoboldAI.

Command used to quantize:

```
python llama.py c:\stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors 4bit-128g.safetensors
```
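For intuition about the flags: `--wbits 4 --groupsize 128` means each run of 128 weights shares one scale/zero-point and each weight is stored in 4 bits. Below is a minimal round-to-nearest sketch of group-wise 4-bit quantization. This is *not* the actual GPTQ algorithm (GPTQ additionally corrects rounding error layer by layer using calibration data such as c4); the function and variable names here are illustrative only.

```python
import numpy as np

def quantize_groupwise(w, wbits=4, groupsize=128):
    """Round-to-nearest group-wise quantization sketch.

    Each contiguous group of `groupsize` weights shares a single
    fp scale and integer zero-point, so --groupsize 128 amounts to
    one scale per 128 4-bit weights.
    """
    maxq = 2 ** wbits - 1                      # 15 for 4-bit
    w = w.reshape(-1, groupsize)               # one row per group
    scale = (w.max(axis=1) - w.min(axis=1)) / maxq
    zero = np.round(-w.min(axis=1) / scale)
    # quantize: integer codes in [0, maxq]
    q = np.clip(np.round(w / scale[:, None]) + zero[:, None], 0, maxq)
    # dequantize back to float to inspect reconstruction error
    deq = (q - zero[:, None]) * scale[:, None]
    return q.astype(np.uint8), deq.reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
q, deq = quantize_groupwise(w)
print(q.min(), q.max())              # codes fit in 4 bits (0..15)
print(float(np.abs(w - deq).max()))  # per-weight error bounded by ~scale/2
```

Smaller group sizes lower the rounding error (each scale fits its group more tightly) at the cost of storing more scales; 128 is the common middle ground used in the command above.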