elinas committed on
Commit 6dd7d8a
1 Parent(s): 9d2cbf0

Update README.md

Files changed (1): README.md (+16 -1)

README.md CHANGED
@@ -9,11 +9,26 @@ https://github.com/qwopqwop200/GPTQ-for-LLaMa
 
 LoRA credit to https://huggingface.co/baseten/alpaca-30b
 
+# Update 2023-03-27
+New weights have been added. The old .pt version is no longer supported and has been replaced by a 128 groupsize safetensors file. Update to the latest GPTQ to use it.
+
+**alpaca-30b-4bit-128g.safetensors**
+
+Evals
+-----
+**c4-new** -
+coming soon
+
+**ptb-new** -
+coming soon
+
+**wikitext2** -
+coming soon
+
 # Usage
 1. Run manually through GPTQ
 2. (More setup but better UI) - Use the [text-generation-webui](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode). Make sure to follow the installation steps first [here](https://github.com/oobabooga/text-generation-webui#installation) before adding GPTQ support.
 
-**Note that a recent code change in GPTQ broke functionality for GPTQ in general, so please follow [these instructions](https://huggingface.co/elinas/alpaca-30b-lora-int4/discussions/2#641a38d5f1ad1c1173d8f192) to fix the issue!**
 
 Since this is instruction tuned, for best results, use the following format for inference:
 ```
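For readers unfamiliar with the "128 groupsize" mentioned in the new weights, here is a rough pure-Python sketch of what group-wise 4-bit quantization means: each run of 128 consecutive weights shares one scale and minimum, so the quantization grid adapts to local weight ranges. This is an illustration only, not GPTQ's actual algorithm (GPTQ additionally minimizes layer output error when choosing the quantized values), and the function names here are hypothetical:

```python
def quantize_groupwise(weights, group_size=128, bits=4):
    """Hypothetical sketch: return (codes, scales, mins).

    Codes are integers in [0, 2**bits - 1]; every group of `group_size`
    consecutive weights shares a single (scale, min) pair.
    """
    qmax = (1 << bits) - 1
    codes, scales, mins = [], [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / qmax or 1.0  # guard against flat groups
        codes.append([round((w - lo) / scale) for w in group])
        scales.append(scale)
        mins.append(lo)
    return codes, scales, mins


def dequantize_groupwise(codes, scales, mins):
    """Reconstruct approximate weights from per-group codes."""
    out = []
    for group, scale, lo in zip(codes, scales, mins):
        out.extend(q * scale + lo for q in group)
    return out
```

Smaller group sizes track local weight ranges more closely (lower error) at the cost of storing more scale/min metadata; 128 is a common middle ground, and the `128g` in the filename refers to exactly this parameter.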