TheBloke committed
Commit 7890228 · 1 Parent(s): c2be8f4

Update README.md

Files changed (1)
  1. README.md +18 -63
README.md CHANGED
@@ -23,6 +23,14 @@ These files are GPTQ 4bit model files for [Panchovix's merge of WizardLM 33B V1.
 
  It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
+ **This is an experimental new GPTQ which offers up to 8K context size**
+
+ The increased context is currently only tested to work with [ExLlama](https://github.com/turboderp/exllama), via the latest release of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+
+ Please read carefully below to see how to use it.
+
+ **NOTE**: Using the full 8K context will exceed 24GB VRAM.
+
  ## Repositories available
 
  * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-SuperHOT-8KGPTQ)
@@ -34,73 +42,20 @@ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com
  Please make sure you're using the latest version of text-generation-webui
 
  1. Click the **Model tab**.
- 2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-33B-V1.0-Uncensored-SuperHOT-8KGPTQ`.
+ 2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-33B-V1.0-Uncensored-SuperHOT-8K-GPTQ`.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done"
- 5. In the top left, click the refresh icon next to **Model**.
- 6. In the **Model** dropdown, choose the model you just downloaded: `WizardLM-33B-V1.0-Uncensored-SuperHOT-8KGPTQ`
- 7. The model will automatically load, and is now ready for use!
- 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
-    * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
- 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
-
- ## How to use this GPTQ model from Python code
-
- First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
-
- `pip install auto-gptq`
-
- Then try the following example code:
-
- ```python
- from transformers import AutoTokenizer, pipeline, logging
- from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
- import argparse
-
- model_name_or_path = "TheBloke/WizardLM-33B-V1.0-Uncensored-SuperHOT-8KGPTQ"
- model_basename = "wizardlm-33b-v1.0-uncensored-superhot-8k-GPTQ-4bit--1g.act.order"
-
- use_triton = False
-
- tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
-
- model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
-         model_basename=model_basename,
-         use_safetensors=True,
-         trust_remote_code=False,
-         device="cuda:0",
-         use_triton=use_triton,
-         quantize_config=None)
-
- # Note: check the prompt template is correct for this model.
- prompt = "Tell me about AI"
- prompt_template=f'''USER: {prompt}
- ASSISTANT:'''
-
- print("\n\n*** Generate:")
-
- input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
- output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
- print(tokenizer.decode(output[0]))
-
- # Inference can also be done using transformers' pipeline
-
- # Prevent printing spurious transformers error when using pipeline with AutoGPTQ
- logging.set_verbosity(logging.CRITICAL)
-
- print("*** Pipeline:")
- pipe = pipeline(
-     "text-generation",
-     model=model,
-     tokenizer=tokenizer,
-     max_new_tokens=512,
-     temperature=0.7,
-     top_p=0.95,
-     repetition_penalty=1.15
- )
-
- print(pipe(prompt_template)[0]['generated_text'])
- ```
+ 5. Untick **Autoload the model**
+ 6. In the top left, click the refresh icon next to **Model**.
+ 7. In the **Model** dropdown, choose the model you just downloaded: `WizardLM-33B-V1.0-Uncensored-SuperHOT-8K-GPTQ`
+ 8. To use the increased context, set the **Loader** to **ExLlama**, set **max_seq_len** to 8192 or 4096, and set **compress_pos_emb** to **4** for 8192 context, or to **2** for 4096 context.
+ 9. Now click **Save Settings** followed by **Reload**
+ 10. The model will automatically load, and is now ready for use!
+ 11. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
+
+ ## How to use this GPTQ model from Python code - TBC
+
+ Using this model with increased context from Python code is currently untested, so this section is removed for now.
 
  ## Provided files
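As a sketch of what the ExLlama settings in step 8 (and the to-be-confirmed Python section) could correspond to in code: the snippet below is not part of this commit and is untested with this model. It assumes the example modules that ship with the [exllama](https://github.com/turboderp/exllama) repository (`model.py`, `tokenizer.py`, `generator.py`) are importable, and the directory path, prompt and sampling values are placeholders.

```python
import glob
import os

# Modules from the exllama repository (https://github.com/turboderp/exllama);
# run from a checkout of that repo or add it to PYTHONPATH.
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

# Placeholder: a local download of this repo (config.json, tokenizer.model, *.safetensors)
model_directory = "/path/to/WizardLM-33B-V1.0-Uncensored-SuperHOT-8K-GPTQ"

tokenizer_path = os.path.join(model_directory, "tokenizer.model")
model_config_path = os.path.join(model_directory, "config.json")
model_path = glob.glob(os.path.join(model_directory, "*.safetensors"))[0]

config = ExLlamaConfig(model_config_path)
config.model_path = model_path

# Mirror the text-generation-webui settings from step 8:
# 8192 context with compress_pos_emb = 4 (or 4096 with compress_pos_emb = 2 to save VRAM).
config.max_seq_len = 8192
config.compress_pos_emb = 4.0

model = ExLlama(config)
tokenizer = ExLlamaTokenizer(tokenizer_path)
cache = ExLlamaCache(model)
generator = ExLlamaGenerator(model, tokenizer, cache)

generator.settings.temperature = 0.7
generator.settings.top_p = 0.95

# Prompt template as used elsewhere in this README (Vicuna style)
prompt = "USER: Tell me about AI\nASSISTANT:"
print(generator.generate_simple(prompt, max_new_tokens=512))
```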
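The `compress_pos_emb` value itself is just SuperHOT-style linear RoPE position interpolation: token positions are divided by the scale factor so that 8192 positions fit inside the 2048-position range of the base LLaMA model. A small self-contained illustration of that idea follows; the helper function is hypothetical and not taken from any of the libraries mentioned above.

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int = 128, base: float = 10000.0) -> torch.Tensor:
    """Rotary-embedding angles for the given (possibly fractional) positions."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return torch.outer(positions.float(), inv_freq)  # shape: (len(positions), dim / 2)

seq_len = 8192            # target context length
compress_pos_emb = 4.0    # scale factor: 8192 / 2048 (the base LLaMA context)

# Standard RoPE would use integer positions 0..8191; SuperHOT-style interpolation
# divides them by the scale factor, so 8192 tokens are squeezed into the
# 0..2048 position range the original model expects.
positions = torch.arange(seq_len) / compress_pos_emb

angles = rope_angles(positions)
print(positions[:6])      # tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000, 1.2500])
print(angles.shape)       # torch.Size([8192, 64])
```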
61