---
license: mit
datasets:
  - NeelNanda/pile-10k
language:
  - en
---

## Model Details

This model is an INT4 model (group_size 128) of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), generated by [intel/auto-round](https://github.com/intel/auto-round).
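To confirm the quantization settings without downloading the full weights, you can inspect the repository's quantization metadata (a minimal sketch; it assumes the packed checkpoint stores GPTQ-style settings in `config.json`, as AutoGPTQ-compatible exports typically do):

```python
from transformers import AutoConfig

# Load only the config; expect bits=4 and group_size=128 in quantization_config
cfg = AutoConfig.from_pretrained("Intel/Phi-3-mini-128k-instruct-int4-inc")
print(cfg.quantization_config)
```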

## INT4 Inference with AutoGPTQ's Kernel

```python
# pip install auto-gptq[triton]
# pip install triton==2.2.0
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_dir = "Intel/Phi-3-mini-128k-instruct-int4-inc"
model = AutoModelForCausalLM.from_pretrained(quantized_model_dir,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             )
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=True)
# Generate a short continuation from a plain-text prompt
inputs = tokenizer("There is a girl who likes adventure,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```
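Since Phi-3-mini-128k-instruct is an instruction-tuned model, prompts generally work best through the tokenizer's chat template rather than raw text. A hedged sketch (the message content below is illustrative, not from the original card):

```python
# Build a chat-formatted prompt with the tokenizer's built-in template
messages = [{"role": "user", "content": "Suggest three weekend adventure ideas."}]
inputs = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```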

## Evaluate the model

Install lm-eval-harness from source; we used commit 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d.

```bash
lm_eval --model hf --model_args pretrained="Intel/Phi-3-mini-128k-instruct-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu --batch_size 32
```
| Metric         | BF16   | INT4   |
| -------------- | ------ | ------ |
| Avg.           | 0.6365 | 0.6300 |
| mmlu           | 0.6247 | 0.6237 |
| lambada_openai | 0.6652 | 0.6433 |
| hellaswag      | 0.5978 | 0.5859 |
| winogrande     | 0.7277 | 0.7230 |
| piqa           | 0.7895 | 0.7846 |
| truthfulqa_mc1 | 0.3562 | 0.3562 |
| openbookqa     | 0.3900 | 0.3800 |
| boolq          | 0.8557 | 0.8489 |
| arc_easy       | 0.8140 | 0.8199 |
| arc_challenge  | 0.5444 | 0.5350 |
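As a quick sanity check on the table, the short script below (values copied from the table above) computes the per-task INT4 delta and reproduces the Avg. row up to rounding:

```python
# Accuracy values copied from the table above: task -> (BF16, INT4)
scores = {
    "mmlu": (0.6247, 0.6237),
    "lambada_openai": (0.6652, 0.6433),
    "hellaswag": (0.5978, 0.5859),
    "winogrande": (0.7277, 0.7230),
    "piqa": (0.7895, 0.7846),
    "truthfulqa_mc1": (0.3562, 0.3562),
    "openbookqa": (0.3900, 0.3800),
    "boolq": (0.8557, 0.8489),
    "arc_easy": (0.8140, 0.8199),
    "arc_challenge": (0.5444, 0.5350),
}
for task, (bf16, int4) in scores.items():
    print(f"{task:16s} delta = {int4 - bf16:+.4f}")

# Should match the Avg. row (0.6365 vs 0.6300), up to rounding
avg_bf16 = sum(b for b, _ in scores.values()) / len(scores)
avg_int4 = sum(i for _, i in scores.values()) / len(scores)
print(f"avg BF16 = {avg_bf16:.4f}, avg INT4 = {avg_int4:.4f}")
```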

## Reproduce the model

Here is a sample command to reproduce the model:

```bash
git clone https://github.com/intel/auto-round
cd auto-round/examples/language-modeling
pip install -r requirements.txt
python3 main.py \
  --model_name microsoft/Phi-3-mini-128k-instruct \
  --device 0 \
  --group_size 128 \
  --bits 4 \
  --iters 200 \
  --seqlen 4096 \
  --minmax_lr 0.01 \
  --deployment_device 'gpu' \
  --gradient_accumulate_steps 2 \
  --train_bs 4 \
  --output_dir "./tmp_autoround"
```
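auto-round also exposes a Python API. The sketch below mirrors the CLI flags above; it is hedged, not definitive: the `AutoRound` argument names follow the library at the time of writing and may differ across versions (e.g., the CLI's `--train_bs` is assumed to map to `batch_size`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "microsoft/Phi-3-mini-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Mirror the CLI flags: 4 bits, group_size 128, 200 tuning iterations, seqlen 4096
autoround = AutoRound(model, tokenizer, bits=4, group_size=128,
                      iters=200, seqlen=4096, minmax_lr=0.01,
                      gradient_accumulate_steps=2, batch_size=4)
autoround.quantize()
autoround.save_quantized("./tmp_autoround")
```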

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:

- [Intel Neural Compressor](https://github.com/intel/neural-compressor)
- [Intel Extension for Transformers](https://github.com/intel/intel-extension-for-transformers)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Cite

```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```

[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)