TheBloke committed
Commit 38d8da7
1 Parent(s): f161771

Update for Transformers GPTQ support
README.md CHANGED
@@ -10,17 +10,20 @@ tags:
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Eric Hartford's Wizard Vicuna 30B Uncensored merged with Kaio Ken's SuperHOT 8K GPTQ
@@ -153,6 +156,7 @@ It was created without group_size to lower VRAM requirements, and with --act-ord
 * Parameters: Groupsize = -1. Act Order / desc_act = True.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -172,12 +176,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
-**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire.
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: Kaio Ken's SuperHOT 8K
@@ -196,9 +203,9 @@ You will need to **use either the monkeypatch** or, if you are already using the
 
 
 #### Training Details
-I trained the LoRA with the following configuration:
+I trained the LoRA with the following configuration:
 - 1200 samples (~400 samples over 2048 sequence length)
-- learning rate of 3e-4
+- learning rate of 3e-4
 - 3 epochs
 - The exported modules are:
 - q_proj
@@ -220,12 +227,12 @@ This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) tr
 
 Shout out to the open source AI/ML community, and everyone who helped me out.
 
-Note:
+Note:
 
-An uncensored model has no guardrails.
+An uncensored model has no guardrails.
 
 You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
 
 Publishing anything this model generates is the same as publishing it yourself.
 
-You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
+You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
config.json CHANGED
@@ -1,29 +1,39 @@
 {
-  "_name_or_path": "/workspace/process/lora_base/ehartford_Wizard-Vicuna-30B-Uncensored",
-  "architectures": [
-    "LlamaForCausalLM"
-  ],
-  "auto_map": {
+  "_name_or_path": "/workspace/process/lora_base/ehartford_Wizard-Vicuna-30B-Uncensored",
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "auto_map": {
     "AutoModel": "modelling_llama.LlamaModel",
     "AutoModelForCausalLM": "modelling_llama.LlamaForCausalLM",
     "AutoModelForSequenceClassification": "modelling_llama.LlamaForSequenceClassification"
-  },
-  "bos_token_id": 1,
-  "eos_token_id": 2,
-  "hidden_act": "silu",
-  "hidden_size": 6656,
-  "initializer_range": 0.02,
-  "intermediate_size": 17920,
-  "max_position_embeddings": 8192,
-  "max_sequence_length": 2048,
-  "model_type": "llama",
-  "num_attention_heads": 52,
-  "num_hidden_layers": 60,
-  "pad_token_id": 0,
-  "rms_norm_eps": 1e-06,
-  "tie_word_embeddings": false,
-  "torch_dtype": "float16",
-  "transformers_version": "4.30.0.dev0",
-  "use_cache": true,
-  "vocab_size": 32000
-}
+  },
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 6656,
+  "initializer_range": 0.02,
+  "intermediate_size": 17920,
+  "max_position_embeddings": 8192,
+  "max_sequence_length": 2048,
+  "model_type": "llama",
+  "num_attention_heads": 52,
+  "num_hidden_layers": 60,
+  "pad_token_id": 0,
+  "rms_norm_eps": 1e-06,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.30.0.dev0",
+  "use_cache": true,
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": -1,
+    "damp_percent": 0.01,
+    "desc_act": true,
+    "sym": true,
+    "true_sequential": true,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
+}
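
The new `quantization_config` block embedded in config.json is what this commit's "Transformers GPTQ support" refers to: recent Transformers releases look for this block and dispatch to a GPTQ loader based on `quant_method`, instead of requiring a separate tool to read quantize_config.json. A minimal sketch of reading and sanity-checking such a block — illustrative parsing only, not the actual Transformers implementation:

```python
import json

# The quantization_config block added to config.json in this commit.
config_text = """
{
  "quantization_config": {
    "bits": 4,
    "group_size": -1,
    "damp_percent": 0.01,
    "desc_act": true,
    "sym": true,
    "true_sequential": true,
    "model_file_base_name": "model",
    "quant_method": "gptq"
  }
}
"""

config = json.loads(config_text)
quant = config["quantization_config"]

# quant_method identifies the quantization scheme; "gptq" selects GPTQ loading.
assert quant["quant_method"] == "gptq"

# group_size = -1 disables groupwise scales (matching "Groupsize = -1" in the
# README, chosen to lower VRAM use), and desc_act = true is act-order.
print(quant["bits"], quant["group_size"], quant["desc_act"])  # → 4 -1 True
```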
wizard-vicuna-30b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5c6a4cc6456f4a83d9fa54e4ffd6377476407069febcf965c59254eee1ff9fef
-size 16940128408
+oid sha256:e162ecd4939838379b1145f6cbefeb93606099f2b9e1187a0b7f5736856d9872
+size 16940128464
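
The RENAMED entry above shows a Git LFS pointer file, not the weights themselves: the repository tracks only a `version`/`oid`/`size` triplet, while the ~16.9 GB safetensors blob lives in LFS storage. A small sketch that parses such a pointer (the `parse_lfs_pointer` helper is hypothetical, not part of any library):

```python
# The new pointer content from this commit, verbatim.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:e162ecd4939838379b1145f6cbefeb93606099f2b9e1187a0b7f5736856d9872
size 16940128464
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; oid carries its hash algorithm prefix.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "hash_algo": algo,
        "digest": digest,
        "size_bytes": int(fields["size"]),
    }

ptr = parse_lfs_pointer(pointer_text)
print(ptr["hash_algo"], ptr["size_bytes"])  # → sha256 16940128464
```

The size change (16940128408 → 16940128464) means the file content itself changed slightly during re-save, not just its name.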
quantize_config.json CHANGED
@@ -1,8 +1,9 @@
 {
-  "bits": 4,
-  "group_size": -1,
-  "damp_percent": 0.01,
-  "desc_act": true,
-  "sym": true,
-  "true_sequential": true
+  "bits": 4,
+  "group_size": -1,
+  "damp_percent": 0.01,
+  "desc_act": true,
+  "sym": true,
+  "true_sequential": true,
+  "model_file_base_name": "model"
 }
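
The added `model_file_base_name` field tells the loader the base name of the weights file, which is why the same commit renames the long descriptive safetensors file to `model.safetensors`. A sketch of the filename resolution this implies (the `weights_filename` helper is illustrative, not AutoGPTQ's or Transformers' actual code):

```python
import json

# The updated quantize_config.json from this commit.
quantize_config = json.loads("""
{
  "bits": 4,
  "group_size": -1,
  "damp_percent": 0.01,
  "desc_act": true,
  "sym": true,
  "true_sequential": true,
  "model_file_base_name": "model"
}
""")

def weights_filename(cfg: dict, use_safetensors: bool = True) -> str:
    # With model_file_base_name set, the loader can look for "<base>.safetensors"
    # instead of guessing the file name. Before this commit the file carried the
    # descriptive name wizard-vicuna-30b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors.
    base = cfg["model_file_base_name"]
    return f"{base}.safetensors" if use_safetensors else f"{base}.bin"

print(weights_filename(quantize_config))  # → model.safetensors
```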