# Meta-Llama-3.1-8B-Instruct-AWQ
The following command was used to quantize Meta-Llama-3.1-8B-Instruct to INT4 AWQ and produce this model.
```bash
python quantize.py --model_dir /Meta-Llama-3.1-8B-Instruct \
--output_dir /Meta-Llama-3.1-8B-Instruct-AWQ \
--dtype bfloat16 \
--qformat int4_awq \
--awq_block_size 64
```
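For intuition about what `--awq_block_size 64` controls, the sketch below shows plain group-wise INT4 quantization with per-group scales, where each contiguous block of 64 weights shares one scale. This is a simplified illustration only: real AWQ additionally searches activation-aware per-channel scales before quantizing, which is omitted here, and the function names are hypothetical.

```python
import numpy as np

def quantize_groupwise_int4(w, block_size=64):
    """Symmetrically quantize a 1-D weight tensor to INT4 in groups.

    Each group of `block_size` weights gets its own scale, chosen so the
    largest-magnitude weight in the group maps to the INT4 extreme (7).
    AWQ's activation-aware scale search is intentionally omitted.
    """
    assert w.size % block_size == 0
    groups = w.reshape(-1, block_size)
    # One scale per group; INT4 symmetric range is [-8, 7].
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_groupwise(q, scales):
    """Recover approximate float weights from INT4 values and scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

np.random.seed(0)
w = np.random.randn(256).astype(np.float32)
q, s = quantize_groupwise_int4(w, block_size=64)
err = np.abs(dequantize_groupwise(q, s) - w).max()
```

A smaller block size gives each scale fewer weights to cover, so the rounding error per group shrinks at the cost of storing more scales; 64 is a common middle ground.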