
This is an INT4 quantized version of the microsoft/Phi-3.5-mini-instruct model. The following Python packages were used to create this model:

```
openvino==2024.5.0rc1
optimum==1.23.3
optimum-intel==1.20.1
nncf==2.13.0
torch==2.5.1
transformers==4.46.2
```
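To reproduce this environment, the pinned versions above can be installed in one step (the exact `openvino` pre-release pin is honored by pip without the `--pre` flag):

```shell
pip install openvino==2024.5.0rc1 optimum==1.23.3 optimum-intel==1.20.1 \
    nncf==2.13.0 torch==2.5.1 transformers==4.46.2
```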

The quantized model was created with the following command:

```shell
optimum-cli export openvino --model "microsoft/Phi-3.5-mini-instruct" --weight-format int4 --group-size 128 --sym --ratio 1 --all-layers ./Phi-3.5-mini-instruct-ov-int4
```

For details on the available export options, run `optimum-cli export openvino --help` from your Python environment.
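A minimal inference sketch using optimum-intel's `OVModelForCausalLM` (the repository ID below is an assumption based on this model card; the tokenizer is taken from the original Microsoft repository):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Assumed repository ID for this quantized model.
model_id = "jojo1899/Phi-3.5-mini-instruct-ov-int4"

# The exported INT4 OpenVINO IR loads directly; no conversion happens here.
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

messages = [{"role": "user", "content": "Summarize OpenVINO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```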

The NNCF log confirms that all layers were quantized to 4 bits:

```
INFO:nncf:Statistics of the bitwidth distribution:
```

| Num bits (N) | % all parameters (layers) | % ratio-defining parameters (layers) |
|---|---|---|
| 4 | 100% (130 / 130) | 100% (130 / 130) |
