---
license: llama2
---

This is an INT4-quantized version of the `Llama-2-13b-chat-hf` model. The following Python packages were used to create it:
|
```
onnx==1.16.0
onnxruntime-directml==1.18.0
onnxruntime-genai-directml==0.2.0
torch==2.3.0+cu121
transformers==4.40.1
```
|
This quantized model was created with the following command:
|
```
python -m onnxruntime_genai.models.builder -m meta-llama/Llama-2-13b-chat-hf -e dml -p int4 --extra_options int4_block_size=128 -o ./Llama-2-13b-chat-hf-onnx-int4
```
|
`onnxruntime_genai.models.builder` quantizes the model with `MatMul4BitsQuantizer` (defined in `matmul_4bits_quantizer.py` under `onnxruntime/quantization/`) using the `"DEFAULT"` quantization algorithm.
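
To give an intuition for what the `int4_block_size=128` option controls: each block of 128 consecutive weights shares a single scale, and each weight is mapped to a 4-bit integer. Below is a minimal, purely illustrative sketch of symmetric block-wise INT4 quantization; the function names are hypothetical, and the real `MatMul4BitsQuantizer` additionally packs two 4-bit values per byte and may use zero points, which this sketch omits.

```python
def quantize_int4(values, block_size=128):
    """Quantize a flat list of floats to 4-bit ints, one scale per block."""
    quantized, scales = [], []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        # Symmetric scaling: map the largest magnitude in the block to 7.
        scale = max(abs(v) for v in block) / 7.0 or 1.0
        scales.append(scale)
        # Round to the signed 4-bit range [-8, 7].
        quantized.append([max(-8, min(7, round(v / scale))) for v in block])
    return quantized, scales

def dequantize_int4(quantized, scales):
    """Recover approximate float values from 4-bit ints and per-block scales."""
    return [q * s for block, s in zip(quantized, scales) for q in block]
```

A larger block size means fewer scales to store (less overhead) but coarser quantization within each block; 128 is a common middle ground.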