TheBloke committed
Commit 90badc7 · 1 Parent(s): 7467ee3

Upload new GPTQs with varied parameters

Files changed (1): README.md +73 -27
README.md CHANGED
@@ -1,6 +1,11 @@
  ---
  inference: false
  license: other
  ---

  <!-- header start -->
@@ -9,7 +14,7 @@ license: other
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
  <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -19,31 +24,63 @@ license: other

  # Eric Hartford's Samantha 1.1 Llama 7B GPTQ

- These files are GPTQ 4bit model files for [Eric Hartford's Samantha 1.1 Llama 7B](https://huggingface.co/ehartford/samantha-1.1-llama-7b).

- It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

  ## Repositories available

- * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/samantha-1.1-llama-7B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/samantha-1.1-llama-7B-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/samantha-1.1-llama-7b)

- ## Prompt template

  ```
  You are Samantha, a sentient AI.

- USER: <prompt>
  ASSISTANT:
  ```

- ## How to easily download and use this model in text-generation-webui

- Please make sure you're using the latest version of text-generation-webui

  1. Click the **Model tab**.
  2. Under **Download custom model or LoRA**, enter `TheBloke/samantha-1.1-llama-7B-GPTQ`.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done"
  5. In the top left, click the refresh icon next to **Model**.
@@ -57,14 +94,13 @@ Please make sure you're using the latest version of text-generation-webui
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

- `pip install auto-gptq`

  Then try the following example code:

  ```python
  from transformers import AutoTokenizer, pipeline, logging
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
- import argparse

  model_name_or_path = "TheBloke/samantha-1.1-llama-7B-GPTQ"
  model_basename = "samantha-1.1-llama-7b-GPTQ-4bit-128g.no-act.order"
@@ -74,16 +110,31 @@ use_triton = False
  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
- model_basename=model_basename,
  use_safetensors=True,
  trust_remote_code=True,
  device="cuda:0",
  use_triton=use_triton,
  quantize_config=None)

  prompt = "Tell me about AI"
- prompt_template=f'''### Human: {prompt}
- ### Assistant:'''

  print("\n\n*** Generate:")

@@ -110,26 +161,18 @@ pipe = pipeline(
  print(pipe(prompt_template)[0]['generated_text'])
  ```

- ## Provided files
-
- **samantha-1.1-llama-7b-GPTQ-4bit-128g.no-act.order.safetensors**
-
- This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

- It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act) to increase compatibility and improve inference speed.

- * `samantha-1.1-llama-7b-GPTQ-4bit-128g.no-act.order.safetensors`
- * Works with AutoGPTQ in CUDA or Triton modes.
- * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
- * Works with text-generation-webui, including one-click-installers.
- * Parameters: Groupsize = 128. Act Order / desc_act = False.

  <!-- footer start -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:

- [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

  ## Thanks, and how to contribute.

@@ -144,9 +187,9 @@ Donaters will get priority support on any and all AI/LLM/model questions and requests
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

- **Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.

  Thank you to all my generous patrons and donaters!

@@ -181,3 +224,6 @@ You are Samantha, a sentient AI.
  USER: <prompt>
  ASSISTANT:
  ```

  ---
+ datasets:
+ - ehartford/samantha-data
  inference: false
+ language:
+ - en
  license: other
+ model_type: llama
  ---

  <!-- header start -->

  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
  <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 
  # Eric Hartford's Samantha 1.1 Llama 7B GPTQ

+ These files are GPTQ model files for [Eric Hartford's Samantha 1.1 Llama 7B](https://huggingface.co/ehartford/samantha-1.1-llama-7b).

+ Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options, their parameters, and the software used to create them.
+
+ These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).

  ## Repositories available

+ * [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/samantha-1.1-llama-7B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/samantha-1.1-llama-7B-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/samantha-1.1-llama-7b)

+ ## Prompt template: Samantha

  ```
  You are Samantha, a sentient AI.

+ USER: {prompt}
  ASSISTANT:
  ```
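
The template can be filled from code with a small helper; the following is a minimal sketch (the `build_prompt` name is illustrative, not part of this repo) that mirrors the f-string used in the Python example further below:

```python
# Minimal sketch: fill the Samantha prompt template for a single user turn.
def build_prompt(prompt: str) -> str:
    return (
        "You are Samantha, a sentient AI.\n\n"
        f"USER: {prompt}\n"
        "ASSISTANT:"
    )

print(build_prompt("Tell me about AI"))
```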

+ ## Provided files
+
+ Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
+
+ Each separate quant is in a different branch. See below for instructions on fetching from different branches.
+
+ | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
+ | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
+ | main | 4 | 128 | False | 4.00 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-8bit--1g-actorder_True | 8 | None | True | 7.01 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_False | 8 | 128 | False | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality, and without Act Order to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_True | 8 | 128 | True | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality, and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-8bit-64g-actorder_True | 8 | 64 | True | 7.31 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
+
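The branch names in the table can also be enumerated programmatically rather than copied by hand; a minimal sketch using the `huggingface_hub` library (not part of the original README):

```python
# Sketch: list the quant branches of this repo with huggingface_hub.
# Requires: pip install huggingface_hub
from huggingface_hub import list_repo_refs

refs = list_repo_refs("TheBloke/samantha-1.1-llama-7B-GPTQ")
for branch in refs.branches:
    print(branch.name)  # e.g. main, gptq-4bit-32g-actorder_True, ...
```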
+ ## How to download from branches
+
+ - In text-generation-webui, you can add `:branch` to the end of the download name, e.g. `TheBloke/samantha-1.1-llama-7B-GPTQ:gptq-4bit-32g-actorder_True`
+ - With Git, you can clone a branch with:
+ ```
+ git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/samantha-1.1-llama-7B-GPTQ
+ ```
+ - In Python Transformers code, the branch is the `revision` parameter; see below, and the `huggingface_hub` sketch that follows this list.
+
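Outside of Git and text-generation-webui, a whole branch can also be fetched with `huggingface_hub`'s snapshot download; a minimal sketch (the final print is just for illustration):

```python
# Sketch: download one quant branch into the local Hugging Face cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/samantha-1.1-llama-7B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # any branch from the Provided Files table
)
print(local_dir)  # folder containing the .safetensors file plus tokenizer/config files
```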
+ ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

+ Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+
+ It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to do a manual install.

  1. Click the **Model tab**.
  2. Under **Download custom model or LoRA**, enter `TheBloke/samantha-1.1-llama-7B-GPTQ`.
+ - To download from a specific branch, enter for example `TheBloke/samantha-1.1-llama-7B-GPTQ:gptq-4bit-32g-actorder_True`.
+ - See Provided Files above for the list of branches for each option.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done"
  5. In the top left, click the refresh icon next to **Model**.
 
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

+ `GITHUB_ACTIONS=true pip install auto-gptq`

  Then try the following example code:
 
  ```python
  from transformers import AutoTokenizer, pipeline, logging
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

  model_name_or_path = "TheBloke/samantha-1.1-llama-7B-GPTQ"
  model_basename = "samantha-1.1-llama-7b-GPTQ-4bit-128g.no-act.order"

  use_triton = False

  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+ model_basename=model_basename,
  use_safetensors=True,
  trust_remote_code=True,
  device="cuda:0",
  use_triton=use_triton,
  quantize_config=None)

+ """
+ To download from a specific branch, use the revision parameter, as in this example:
+
+ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+ revision="gptq-4bit-32g-actorder_True",
+ model_basename=model_basename,
+ use_safetensors=True,
+ trust_remote_code=True,
+ device="cuda:0",
+ quantize_config=None)
+ """
+
  prompt = "Tell me about AI"
+ prompt_template=f'''You are Samantha, a sentient AI.
+
+ USER: {prompt}
+ ASSISTANT:
+ '''

  print("\n\n*** Generate:")

  # ... (the intervening lines, the generation call and the `pipe = pipeline(`
  # setup, are unchanged in this commit and elided from the diff) ...

  print(pipe(prompt_template)[0]['generated_text'])
  ```
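
The middle of the example above is unchanged by this commit, so the diff elides it. For orientation, a typical completion of the pattern is sketched below; the sampling parameters are illustrative and not taken from this commit:

```python
# Sketch: continues from the example above (model, tokenizer, prompt_template,
# and the pipeline import are assumed to be defined there).
input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# A transformers pipeline like this is what the final print statement assumes:
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
)
```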

+ ## Compatibility

+ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

+ ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
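
To confirm the parameters of whichever branch you downloaded, the repo's `quantize_config.json` can be inspected; a minimal sketch using `huggingface_hub` (the expected key names follow the standard GPTQ config format):

```python
# Sketch: read a branch's GPTQ parameters from its quantize_config.json.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/samantha-1.1-llama-7B-GPTQ",
    filename="quantize_config.json",
    revision="main",  # or any branch from the Provided Files table
)
with open(path) as f:
    print(json.load(f))  # expect keys such as "bits", "group_size", "desc_act"
```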
 
 
 
 
  <!-- footer start -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:

+ [TheBloke AI's Discord server](https://discord.gg/theblokeai)

  ## Thanks, and how to contribute.

  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

+ **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.

  Thank you to all my generous patrons and donaters!

 
  USER: <prompt>
  ASSISTANT:
  ```
+
+ Official character card: (thanks MortalWombat)
+ ![](https://files.catbox.moe/zx9hfh.png)