TheBloke committed on
Commit 18df7ce
1 Parent(s): 00266b5

Update README.md

Files changed (1): README.md +80 -0
README.md CHANGED
---
license: other
inference: false
---

# WizardLM: An Instruction-following LLM Using Evol-Instruct

These files are the result of merging the [delta weights](https://huggingface.co/victor123/WizardLM) with the original LLaMA 7B model.

The code for merging is provided in the [WizardLM official Github repo](https://github.com/nlpxucan/WizardLM).

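
For reference, the merge itself is conceptually just adding the delta tensors onto the base weights. The sketch below only illustrates that idea and is not the official script: it assumes the delta checkpoint stores (finetuned minus base) tensors with matching names and shapes, and the paths are placeholders for your own copies of the weights. Prefer the merge code in the WizardLM repo for real use.

```
# Illustrative sketch of additive delta-weight merging; NOT the official WizardLM script.
# Assumption: the delta checkpoint stores (finetuned - base) tensors with matching keys/shapes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "path/to/llama-7b-hf"   # original LLaMA 7B in HF format (placeholder path)
DELTA = "victor123/WizardLM"   # delta weights linked above
OUT = "wizardLM-7B-HF"         # output directory for the merged model (placeholder)

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained(DELTA, torch_dtype=torch.float16)

# Add each delta tensor onto the matching base tensor, in place.
base_sd, delta_sd = base.state_dict(), delta.state_dict()
for name, tensor in base_sd.items():
    tensor += delta_sd[name]

base.save_pretrained(OUT)
AutoTokenizer.from_pretrained(DELTA).save_pretrained(OUT)
```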

## WizardLM-7B 4bit GPTQ

This repo contains 4bit GPTQ models, quantised using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## GIBBERISH OUTPUT IN `text-generation-webui`?

Please read the Provided Files section below. You should use `wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors` unless you are able to use the latest GPTQ-for-LLaMa code.

If you're using a text-generation-webui one-click installer, you MUST use `wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors`.

## Provided files

Two files are provided. **The second file will not work unless you use a recent version of the Triton branch of GPTQ-for-LLaMa.**

Specifically, the second file uses `--act-order` for maximum quantisation quality and will not work with oobabooga's fork of GPTQ-for-LLaMa. Therefore at this time it will also not work with `text-generation-webui` one-click installers.

Unless you are able to use the latest Triton GPTQ-for-LLaMa code, please use `wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors`. (A quick way to check which file you have downloaded is sketched after the list below.)

* `wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
  * Works on Windows
  * Parameters: Groupsize = 128. No act-order.
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py wizardLM-7B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors
    ```
* `wizardLM-7B-GPTQ-4bit-128g.act-order.safetensors`
  * Only works with recent GPTQ-for-LLaMa code
  * **Does not** work with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. act-order.
  * Offers the highest quality quantisation, but requires recent Triton GPTQ-for-LLaMa code and more VRAM
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py wizardLM-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors wizardLM-7B-GPTQ-4bit-128g.act-order.safetensors
    ```
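
If you want to double-check which of the two files you downloaded, or that a download completed, you can list the tensors stored inside it. This is a generic `safetensors` inspection snippet rather than anything specific to this repo; GPTQ-for-LLaMa checkpoints typically hold per-layer `qweight`, `qzeros` and `scales` tensors, but treat those exact names as an assumption.

```
# Generic safetensors inspection: list tensor names, shapes and dtypes without loading a model.
from safetensors import safe_open

path = "wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors"  # or the act-order file

with safe_open(path, framework="pt", device="cpu") as f:
    for name in f.keys():
        tensor = f.get_tensor(name)
        print(f"{name}: shape={tuple(tensor.shape)}, dtype={tensor.dtype}")
```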

## How to run in `text-generation-webui`

File `wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).

[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).

The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest Triton GPTQ-for-LLaMa is used inside the UI.

If you want to use the act-order `safetensors` file and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```

Then install this model into `text-generation-webui/models` (a scripted download option is sketched below) and launch the UI as follows:
```
cd text-generation-webui
python server.py --model wizardLM-7B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
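
If you'd rather script that "install this model into `models/`" step than download files by hand, a `huggingface_hub` sketch along these lines should work. The repo id below is a placeholder (use the namespace/name from this page's URL), and the `allow_patterns` list is my assumption about which files the UI needs (config, tokenizer, and the no-act-order weights):

```
# Hedged download sketch using huggingface_hub; repo id and file patterns are assumptions.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="<this-repo-id>",  # placeholder: the namespace/name shown in this page's URL
    local_dir="text-generation-webui/models/wizardLM-7B-GPTQ-4bit-128g",
    allow_patterns=["*.json", "*.model", "*no-act-order.safetensors"],
)
```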

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.

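
As a quick sanity check before launching the UI, you can confirm that the key Python packages import; the `triton` line only matters if you plan to use the act-order file with the Triton branch, and treating `triton` as that branch's requirement is my assumption:

```
# Convenience-only dependency check; not taken from the official docs of either project.
import importlib.util

for module in ("torch", "transformers", "triton"):
    found = importlib.util.find_spec(module) is not None
    print(f"{module}: {'found' if found else 'MISSING'}")
```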

If you can't update GPTQ-for-LLaMa or don't want to, you can use `wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.

# Original model info

## Overview of Evol-Instruct

Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions across a wide range of difficulty levels and skills, in order to improve the performance of LLMs.

![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_running.png)