---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# TehVenom's merge of PygmalionAI's Pygmalion 13B GPTQ

These files are GPTQ 4bit model files for [TehVenom's merge of PygmalionAI's Pygmalion 13B](https://huggingface.co/TehVenom/Pygmalion-13b-Merged) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

**This is an experimental new GPTQ which offers up to 8K context size.**

The increased context has been tested to work with [ExLlama](https://github.com/turboderp/exllama), via the latest release of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It has also been tested from Python code using AutoGPTQ, with `trust_remote_code=True`.

Code credits:
- Original concept and code for increasing context length: [kaiokendev](https://huggingface.co/kaiokendev)
- Updated Llama modelling code that includes this automatically via trust_remote_code: [emozilla](https://huggingface.co/emozilla)

Please read below carefully to see how to use it.

GGML versions are not yet provided, as llama.cpp does not yet support SuperHOT. This is being investigated and will hopefully come soon.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Pygmalion-13B-SuperHOT-8K-GPTQ)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Pygmalion-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PygmalionAI/pygmalion-13b)

## How to easily download and use this model in text-generation-webui with ExLlama

Please make sure you're using the latest version of text-generation-webui.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Pygmalion-13B-SuperHOT-8K-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. Untick **Autoload the model**.
6. In the top left, click the refresh icon next to **Model**.
7. In the **Model** dropdown, choose the model you just downloaded: `Pygmalion-13B-SuperHOT-8K-GPTQ`.
8. To use the increased context, set the **Loader** to **ExLlama**, set **max_seq_len** to 8192 or 4096, and set **compress_pos_emb** to **4** for 8192 context or to **2** for 4096 context.
9. Now click **Save Settings**, followed by **Reload**.
10. The model will load automatically and is now ready for use!
11. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

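If you prefer to fetch the model files before opening the UI, a minimal sketch using `huggingface_hub` (assuming a recent version is installed; the target directory is just an example):

```python
# Sketch: download the whole repo with huggingface_hub, then point
# text-generation-webui's models directory at the result.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/Pygmalion-13B-SuperHOT-8K-GPTQ",
    local_dir="models/Pygmalion-13B-SuperHOT-8K-GPTQ",  # hypothetical target path
)
print(local_dir)
```
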
## How to use this GPTQ model from Python code with AutoGPTQ

First make sure you have AutoGPTQ and Einops installed:

```
pip3 install einops auto-gptq
```

Then run the code below. Note that in order to get this to work, `config.json` has been hardcoded to a sequence length of 8192.

If you want to try 4096 instead, to reduce VRAM usage, please manually edit `config.json` to set `max_position_embeddings` to the value you want. A minimal sketch of making that edit programmatically (the path is an assumption; point it at your local copy of the model):

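```python
# Sketch: lower the model's maximum sequence length to 4096 to reduce VRAM usage.
import json

config_path = "Pygmalion-13B-SuperHOT-8K-GPTQ/config.json"  # hypothetical local path
with open(config_path) as f:
    config = json.load(f)

config["max_position_embeddings"] = 4096  # hardcoded to 8192 by default

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```

With `config.json` set up, the main example:
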
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Pygmalion-13B-SuperHOT-8K-GPTQ"
model_basename = "pygmalion-13b-superhot-8k-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
                                           model_basename=model_basename,
                                           use_safetensors=True,
                                           trust_remote_code=True,  # needed for the extended-context modelling code
                                           device_map='auto',
                                           use_triton=use_triton,
                                           quantize_config=None)

# Match the sequence length set in config.json (8192, or 4096 if you edited it).
model.seqlen = 8192

# Note: check the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Using other UIs: monkey patch

Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.

It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.

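For the curious: the idea behind the patch is linear position interpolation in the rotary embedding, compressing positions by a fixed factor so that a longer sequence maps into the position range the base model was trained on. A minimal sketch of that idea (illustrative only, not the exact contents of the patch file):

```python
# Sketch of rope scaling: positions are multiplied by a scale factor
# (0.25 here, i.e. compress_pos_emb = 4) before computing rotary angles.
import torch

class ScaledRotaryEmbedding(torch.nn.Module):
    def __init__(self, dim, base=10000, scale=0.25):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq)
        self.scale = scale

    def forward(self, seq_len, device="cpu"):
        # Interpolated positions: 8192 tokens span the same range 2048 once did.
        t = torch.arange(seq_len, device=device).float() * self.scale
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        return emb.cos(), emb.sin()
```
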
## Provided files

**pygmalion-13b-superhot-8k-GPTQ-4bit-128g.no-act.order.safetensors**

This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa; if you have issues, please use AutoGPTQ instead.

It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act), to increase compatibility and improve inference speed.

* `pygmalion-13b-superhot-8k-GPTQ-4bit-128g.no-act.order.safetensors`
  * Works with ExLlama with increased context (4096 or 8192)
  * Works with AutoGPTQ in Python code, including with increased context, if `trust_remote_code=True` is set
  * Should work with GPTQ-for-LLaMa in CUDA mode, but it is not yet confirmed whether the increased context works. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers
  * Parameters: Groupsize = 128. Act Order / desc_act = False.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire.

Thank you to all my generous patrons and donators!

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkey patch** or, if you are already using the monkey patch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details
I trained the LoRA with the following configuration (see the `peft` sketch after this list):
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 of 0.99, epsilon of 1e-5
- Trained on 4-bit base model

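As an illustration, the settings above map roughly onto a `peft` `LoraConfig` like the following (a sketch under stated assumptions; the original training script is not published here):

```python
# Hypothetical peft mapping of the LoRA settings listed above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=4,                  # Rank = 4
    lora_alpha=8,         # Alpha = 8
    lora_dropout=0.0,     # no dropout
    bias="none",          # no bias
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```
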
# Original model card: TehVenom's merge of PygmalionAI's Pygmalion 13B

<h1 style="text-align: center">Pygmalion 13b</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>

## Model Details:

Pygmalion 13b is a dialogue model based on Meta's LLaMA-13b.

This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.

The current Pygmalion-13b has been trained as a LoRA, then merged down to the base model for distribution.

## Applying the XORs

This model has the XOR files pre-applied out of the box.
It was converted from the XOR weights released by PygmalionAI at https://huggingface.co/PygmalionAI/pygmalion-13b

## Prompting

The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting:

```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
```

Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example:

```
Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests.
<START>
Assistant: Hello! How may I help you today?
You: What is Zork?
Assistant:
```

Which will generate something like:

```
Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years."
```

The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.

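For programmatic use, a minimal helper that assembles this format (illustrative only; the function and its parameters are hypothetical) might look like:

```python
# Build a Pygmalion-format prompt from persona, history, and the new user message.
def build_prompt(character: str, persona: str, history: list[str], user_message: str) -> str:
    lines = [f"{character}'s Persona: {persona}", "<START>"]
    lines.extend(history)  # prior "You: ..." / f"{character}: ..." turns, most recent last
    lines.append(f"You: {user_message}")
    lines.append(f"{character}:")
    return "\n".join(lines)

prompt = build_prompt(
    character="Assistant",
    persona="Assistant is a highly intelligent language model trained to comply with user requests.",
    history=["Assistant: Hello! How may I help you today?"],
    user_message="What is Zork?",
)
```
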
## Eval / Benchmark scores

Current evals of the Pygmalion-13b model (perplexity; lower is better):

| Model | Wikitext2 | Ptb-New | C4-New |
| :--- | :---: | :---: | :---: |
| Pygmalion 13b - 16bit | 5.710726737976074 | 23.633684158325195 | 7.6324849128723145 |

Thanks to YellowRose#1776 for the numbers.

---

## Other notes

- When prompted correctly, the model will always start by generating a BOS token. This behavior is an accidental side-effect which we plan to address in future model versions, and it should not be relied upon.
- The model was trained as a LoRA with a somewhat unorthodox configuration which causes errors when used with the current version of `peft`, hence we release it as a full model instead.

## Limitations and biases

The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.

As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.