
# Meta-Llama-3.1-8B-Instruct-AWQ

The following command was used to produce this model.

```bash
python quantize.py --model_dir /Meta-Llama-3.1-8B-Instruct \
                   --output_dir /Meta-Llama-3.1-8B-Instruct-AWQ \
                   --dtype bfloat16 \
                   --qformat int4_awq \
                   --awq_block_size 64
```
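
These flags match TensorRT-LLM's example `quantize.py` script: `--qformat int4_awq` applies 4-bit activation-aware weight quantisation, and `--awq_block_size 64` sets the quantisation group size. To actually run the quantised checkpoint, the usual next step is compiling it into an engine with `trtllm-build`; a minimal sketch, assuming a standard TensorRT-LLM installation and the output directory from the command above:

```bash
# Sketch only: assumes TensorRT-LLM is installed and the AWQ checkpoint
# produced above is in /Meta-Llama-3.1-8B-Instruct-AWQ.
trtllm-build --checkpoint_dir /Meta-Llama-3.1-8B-Instruct-AWQ \
             --output_dir /Meta-Llama-3.1-8B-Instruct-AWQ-engine \
             --gemm_plugin auto
```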