TheBloke committed
Commit 6a4336e
1 Parent(s): fe82c9d

Warning re GPTQ files not working

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -3,6 +3,10 @@ This repo contains the weights of the Koala 7B model produced at Berkeley. It is
 
 This version has then been quantized to 4bit using https://github.com/qwopqwop200/GPTQ-for-LLaMa
 
+### WARNING: At the present time the GPTQ files uploaded here are producing garbage output. It is not recommended to use them.
+
+I'm working on diagnosing this issue and producing working files.
+
 Quantization command was:
 ```
 python3 llama.py /content/koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save /content/koala-7B-4bit-128g.pt
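For context, a checkpoint produced by the quantization command above would normally be loaded back through GPTQ-for-LLaMa's inference script. The sketch below assumes the repo's `llama_inference.py` entry point and its flag names; it is not part of this commit, and given the warning above the current files are not expected to produce usable output:

```
# Hypothetical inference sketch, assuming GPTQ-for-LLaMa's llama_inference.py.
# The --wbits and --groupsize values must match those used at quantization time.
python3 llama_inference.py /content/koala-7B-HF \
    --wbits 4 \
    --groupsize 128 \
    --load /content/koala-7B-4bit-128g.pt \
    --text "Hello, Koala"
```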