Commit 064ee61
Parent(s): 241b86d

Improve installation + code snippets (#4) (a02b070b1911ba4638c193217ae0edfbf72c71a9)
Co-authored-by: Joshua <Xenova@users.noreply.huggingface.co>

README.md CHANGED
@@ -33,18 +33,12 @@ This repository contains [`meta-llama/Meta-Llama-3.1-405B-Instruct`](https://hug

In order to use the current quantized model, support is offered for different solutions as `transformers`, `autoawq`, or `text-generation-inference`.

-### 🤗

-In order to run the inference with Llama 3.1 405B Instruct AWQ in INT4,

```bash
-pip install
-```
-
-Then, the latest version of `transformers` need to be installed, being 4.43.0 or higher, as:
-
-```bash
-pip install "transformers[accelerate]>=4.43.0" --upgrade
```

To run the inference on top of Llama 3.1 405B Instruct AWQ in INT4 precision, the AWQ model can be instantiated as any other causal language modeling model via `AutoModelForCausalLM` and run the inference normally.

@@ -54,15 +48,7 @@ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"
-prompt = [
-    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
-    {"role": "user", "content": "What's Deep Learning?"},
-]
-
tokenizer = AutoTokenizer.from_pretrained(model_id)
-
-inputs = tokenizer.apply_chat_template(prompt, tokenize=True, add_generation_prompt=True, return_tensors="pt").cuda()
-
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,

@@ -70,22 +56,28 @@ model = AutoModelForCausalLM.from_pretrained(
    device_map="auto",
)

-
-
```

### AutoAWQ

-In order to run the inference with Llama 3.1 405B Instruct AWQ in INT4,
-
-```bash
-pip install "torch>=2.2.0,<2.3.0" autoawq --upgrade
-```
-
-Then, the latest version of `transformers` need to be installed, being 4.43.0 or higher, as:

```bash
-pip install
```

Alternatively, one may want to run that via `AutoAWQ` even though it's built on top of 🤗 `transformers`, which is the recommended approach instead as described above.

@@ -96,11 +88,6 @@ from awq import AutoAWQForCausalLM
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"
-prompt = [
-    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
-    {"role": "user", "content": "What's Deep Learning?"},
-]
-
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_pretrained(
    model_id,

@@ -109,9 +96,20 @@ model = AutoAWQForCausalLM.from_pretrained(
    device_map="auto",
)

-
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
-print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

The AutoAWQ script has been adapted from [AutoAWQ/examples/generate.py](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py).

@@ -125,21 +123,13 @@ Coming soon!
> [!NOTE]
> In order to quantize Llama 3.1 405B Instruct using AutoAWQ, you will need to use an instance with at least enough CPU RAM to fit the whole model i.e. ~800GiB, and an NVIDIA GPU with 80GiB of VRAM to quantize it.

-In order to quantize Llama 3.1 405B Instruct, first install
-
-```bash
-pip install "torch>=2.2.0,<2.3.0" autoawq --upgrade
-```
-
-Otherwise the quantization may fail, since the AutoAWQ kernels are built with PyTorch 2.2.1, meaning that those will break with PyTorch 2.3.0.
-
-Then install the latest version of `transformers` as follows:

```bash
-pip install
```

-

```python
from awq import AutoAWQForCausalLM

@@ -156,9 +146,9 @@ quant_config = {

# Load model
model = AutoAWQForCausalLM.from_pretrained(
-    model_path,
)
-tokenizer = AutoTokenizer.from_pretrained(model_path

# Quantize
model.quantize(tokenizer, quant_config=quant_config)
After this commit, the affected sections of `README.md` read as follows.

The current quantized model can be used with several solutions, such as `transformers`, `autoawq`, or `text-generation-inference`.

### 🤗 Transformers

In order to run inference with Llama 3.1 405B Instruct AWQ in INT4, you need to install the following packages:

```bash
pip install -q --upgrade transformers autoawq accelerate
```

To run inference with Llama 3.1 405B Instruct AWQ in INT4 precision, the AWQ model can be instantiated like any other causal language model via `AutoModelForCausalLM`, and inference can be run as usual.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    # ...
    device_map="auto",
)

prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
```
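
The same checkpoint can usually also be driven through the high-level `pipeline` API; what follows is a minimal sketch under that assumption, not taken from the model card:

```python
# Minimal sketch (assumption, not part of the model card): running the same AWQ
# checkpoint through the high-level text-generation pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4",
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]

# Recent transformers pipelines accept chat messages directly; the returned
# conversation's last message holds the assistant's reply.
result = pipe(messages, max_new_tokens=256, do_sample=True)
print(result[0]["generated_text"][-1]["content"])
```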

### AutoAWQ

In order to run inference with Llama 3.1 405B Instruct AWQ in INT4, you need to install the following packages:

```bash
pip install -q --upgrade transformers autoawq accelerate
```

Alternatively, you may want to run the inference via `AutoAWQ`, even though it is built on top of 🤗 `transformers`; the `transformers` approach described above remains the recommended one.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_pretrained(
    model_id,
    # ...
    device_map="auto",
)

prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
```

The AutoAWQ script has been adapted from [AutoAWQ/examples/generate.py](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py).

> [!NOTE]
> In order to quantize Llama 3.1 405B Instruct using AutoAWQ, you will need to use an instance with at least enough CPU RAM to fit the whole model, i.e. ~800 GiB, and an NVIDIA GPU with 80 GiB of VRAM to quantize it.
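
As a rough back-of-the-envelope check, the figures in the note line up with the parameter count (weights only, runtime overhead ignored):

```python
# Rough sanity check of the memory figures above (assumption: weight storage
# dominates; activations and framework overhead are ignored).
params = 405e9                # Llama 3.1 405B parameter count
fp16_bytes = params * 2       # 2 bytes per parameter in FP16/BF16
int4_bytes = params * 0.5     # 0.5 bytes per parameter in INT4

print(f"FP16 weights: ~{fp16_bytes / 2**30:.0f} GiB")  # ~754 GiB, hence the ~800 GiB CPU RAM
print(f"INT4 weights: ~{int4_bytes / 2**30:.0f} GiB")  # ~189 GiB after AWQ INT4 quantization
```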

In order to quantize Llama 3.1 405B Instruct, first install the following packages:

```bash
pip install -q --upgrade transformers autoawq accelerate
```

Then run the following script, adapted from [`AutoAWQ/examples/quantize.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/quantize.py):

```python
from awq import AutoAWQForCausalLM

# ... (remaining imports, model_path, and quant_config are defined earlier in the script; not shown in this hunk)

# Load model
model = AutoAWQForCausalLM.from_pretrained(
    model_path, low_cpu_mem_usage=True, use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)
```
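
The hunk shown ends at the `quantize` call. A minimal sketch of the usual follow-up step, based on the linked AutoAWQ example rather than on this commit (the `quant_path` output directory is an assumption):

```python
# Minimal sketch (assumption: mirrors the linked AutoAWQ quantize.py example,
# not shown in this diff). `quant_path` is a hypothetical output directory.
quant_path = "Meta-Llama-3.1-405B-Instruct-AWQ-INT4"

model.save_quantized(quant_path)       # write the quantized weights and config
tokenizer.save_pretrained(quant_path)  # keep the tokenizer files alongside them
```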