jojo1899 committed
Commit b3b6a0e
1 Parent(s): 837d24b

Updated README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -2,7 +2,7 @@
 license: llama2
 ---
 
-The Python packages used are as follows:
+This is an INT4 quantized version of the `Llama-2-13b-chat-hf` model. The Python packages used in creating this model are as follows:
 ```
 onnx==1.16.0
 onnxruntime-directml==1.18.0
@@ -14,4 +14,4 @@ This quantized model is created using the following command:
 ```
 python -m onnxruntime_genai.models.builder -m meta-llama/Llama-2-13b-chat-hf -e dml -p int4 --extra_options {"int4_block_size"=128} -o ./Llama-2-13b-chat-hf-onnx-int4
 ```
-`onnxruntime_genai.models.builder` quantizes the model using `MatMul4BitsQuantizer` from `onnxruntime/quantization/matmul_4bits_quantizer.py` with the "DEFAULT" method.
+`onnxruntime_genai.models.builder` quantizes the model using `MatMul4BitsQuantizer` from `matmul_4bits_quantizer.py` of `onnxruntime/quantization/` with the `"DEFAULT"` method.
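
The final line of the diff names the class that performs the actual INT4 work. As a minimal sketch of that step, assuming the `MatMul4BitsQuantizer` API shipped with onnxruntime 1.18 (the ONNX filenames here are illustrative placeholders, not the builder's actual intermediate files):

```python
# Sketch of the INT4 weight-only quantization step, assuming onnxruntime
# 1.18's Python API. Input/output paths are placeholders.
import onnx
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

model = onnx.load("llama-2-13b-chat-hf.onnx")

# block_size=128 mirrors the int4_block_size passed via --extra_options.
# With no explicit algorithm config, the quantizer falls back to its
# "DEFAULT" method, matching the README's description.
quantizer = MatMul4BitsQuantizer(model, block_size=128)
quantizer.process()

# quantizer.model wraps the quantized graph; external data format is needed
# because 13B-parameter weights exceed protobuf's 2 GB limit.
quantizer.model.save_model_to_file(
    "llama-2-13b-chat-hf-int4.onnx",
    use_external_data_format=True,
)
```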