---
license: llama2
---

This is an INT4-quantized version of the `meta-llama/Llama-2-13b-chat-hf` model. It was created with the following Python packages:
```
onnx==1.16.1
onnxruntime-directml==1.20.0
onnxruntime-genai-directml==0.4.0
torch==2.5.1
transformers==4.45.2
```
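Assuming a `pip`-based environment, one way to reproduce these pins (package names and versions taken from the list above; the DirectML packages require Windows) is:

```shell
pip install onnx==1.16.1 onnxruntime-directml==1.20.0 \
    onnxruntime-genai-directml==0.4.0 torch==2.5.1 transformers==4.45.2
```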
This quantized model was created with the following command (`--extra_options` takes space-separated `key=value` pairs):
```
python -m onnxruntime_genai.models.builder -m meta-llama/Llama-2-13b-chat-hf -e dml -p int4 --extra_options int4_block_size=128 -o ./Llama-2-13b-chat-hf-onnx-int4
```
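For reference, a minimal generation loop against the exported folder might look like the sketch below. It assumes the `onnxruntime-genai` 0.4.x Python API (`og.Model`, `og.Tokenizer`, `og.GeneratorParams`, `og.Generator`) and the output path used in the command above; the prompt text is only an example.

```python
import onnxruntime_genai as og

# Load the INT4 ONNX model produced by the builder (output path from the command above).
model = og.Model("./Llama-2-13b-chat-hf-onnx-int4")
tokenizer = og.Tokenizer(model)

# Llama-2 chat prompt format.
prompt = "<s>[INST] What is INT4 quantization? [/INST]"

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

# Token-by-token generation loop (0.4.x-style API).
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```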
`onnxruntime_genai.models.builder` quantizes the model with `MatMul4BitsQuantizer` (defined in `matmul_4bits_quantizer.py` under `onnxruntime/quantization/`), using its `"DEFAULT"` algorithm.
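To illustrate what the `int4_block_size=128` option controls: block-wise quantization stores one floating-point scale per block of weights rather than one per tensor, so each block's 4-bit codes track its local magnitude. The sketch below is a simplified symmetric variant for illustration only; it is not the actual `MatMul4BitsQuantizer` implementation, which additionally packs two 4-bit values per byte and supports zero points.

```python
def quantize_blockwise(weights, block_size=128, bits=4):
    """Toy symmetric block-wise quantizer: one float scale per block of weights."""
    qmax = 2 ** (bits - 1) - 1  # 7 for 4-bit signed values
    quantized, scales = [], []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        # Scale so the largest magnitude in the block maps to +/-qmax.
        scale = max(abs(w) for w in block) / qmax or 1.0  # 1.0 guards an all-zero block
        scales.append(scale)
        quantized.append([round(w / scale) for w in block])
    return quantized, scales


def dequantize_blockwise(quantized, scales):
    """Reconstruct approximate float weights from per-block codes and scales."""
    return [q * s for block, s in zip(quantized, scales) for q in block]
```

Per-weight reconstruction error is bounded by half the block's scale, so smaller blocks track local weight magnitudes more closely at the cost of storing more scales.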