Initial GPTQ model commit
README.md CHANGED
@@ -42,9 +42,9 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 ## Prompt template: Llama-2-Chat
 
 ```
-
-
-
+SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
+USER: {prompt}
+ASSISTANT:
 ```
 
 ## Provided files
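The template added in this hunk is a plain string with a `{prompt}` placeholder. A minimal sketch of filling it in Python; the `build_prompt` helper and the shortened system message are illustrative, not part of the README:

```python
# Minimal sketch of filling the SYSTEM/USER/ASSISTANT template above.
# The helper name and the abbreviated system message are illustrative only.
SYSTEM_MESSAGE = (
    "You are a helpful, respectful and honest assistant. "
    "Always answer as helpfully as possible, while being safe."
)

def build_prompt(user_prompt: str) -> str:
    # Mirrors the fenced template: a SYSTEM line, a USER line, and a
    # trailing "ASSISTANT:" that the model is expected to complete.
    return f"SYSTEM: {SYSTEM_MESSAGE}\nUSER: {user_prompt}\nASSISTANT:"

print(build_prompt("Tell me about AI"))
```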
@@ -59,6 +59,10 @@ Each separate quant is in a different branch. See below for instructions on fet
 | gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
 | gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
 | gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-8bit-128g-actorder_True | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-8bit-64g-actorder_True | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
+| gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+| gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
 
 ## How to download from branches
 
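Each row in this table is a git branch of the model repo. One way to fetch a single quant is a sketch using `huggingface_hub`; the repo id below is a placeholder, not this model's actual id:

```python
# Sketch: fetching one quant branch with huggingface_hub.
# The repo id is a placeholder -- substitute this model's actual id.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/Some-Model-GPTQ",      # placeholder repo id
    revision="gptq-4bit-32g-actorder_True",  # branch name from the table above
    local_dir="some-model-gptq-4bit-32g",
)
print(local_path)
```

Plain git works too: `git clone --single-branch --branch <branch-name> <repo-url>` fetches the same files, which is what the "How to download from branches" section below covers.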
@@ -128,9 +132,9 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 """
 
 prompt = "Tell me about AI"
-prompt_template=f'''
-
-
+prompt_template=f'''SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
+USER: {prompt}
+ASSISTANT:
 '''
 
 print("\n\n*** Generate:")
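This hunk sits inside the README's AutoGPTQ example (the hunk header shows the `AutoGPTQForCausalLM.from_quantized(...)` call it belongs to) and ends just before the generation step. A self-contained sketch of that load-and-generate flow, under stated assumptions: the repo id is a placeholder, the full SYSTEM text is elided here, and the sampling parameters are illustrative rather than taken from this commit:

```python
# Sketch of the load-and-generate flow the hunk above is taken from.
# Assumptions: placeholder repo id, abbreviated template, illustrative
# sampling values; requires a CUDA GPU and the auto-gptq package.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Some-Model-GPTQ"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
                                           use_safetensors=True,
                                           device="cuda:0")

prompt = "Tell me about AI"
prompt_template = f"SYSTEM: ...\nUSER: {prompt}\nASSISTANT:"  # full SYSTEM text elided

input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```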
|