n1ck-guo committed on
Commit c703901 · verified · 1 Parent(s): 1a538fb

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -83,8 +83,8 @@ pip3 install lm-eval==0.4.2
 
 ```bash
 git clone https://github.com/intel/auto-round
-cd auto-round/examples/language-modeling
-python3 eval_042/evluation.py --model_name "Intel/Qwen2.5-0.5B-Instruct-int4-inc" --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,truthfulqa_mc2,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k,cmmlu,ceval-valid
+pip install -vvv --no-build-isolation -e .
+auto-round --model "Intel/Qwen2.5-0.5B-Instruct-int4-inc" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,truthfulqa_mc2,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k,cmmlu,ceval-valid
 ```
 
 | Metric | BF16 | INT4(group_size 128) | INT4(group_size 32) |
@@ -116,7 +116,7 @@ Here is the sample command to reproduce the model. We observed a larger accuracy
 ```bash
 git clone https://github.com/intel/auto-round
 pip install -vvv --no-build-isolation -e .
-auto_round \
+auto-round \
 --model_name Qwen/Qwen2.5-0.5B-Instruct \
 --device 0 \
 --group_size 128 \
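For anyone scripting the updated evaluation command rather than typing it by hand, the long comma-separated `--tasks` value can be assembled programmatically. This is a minimal sketch: the task names and flags are copied verbatim from the diff above, while `build_tasks_arg` and `build_eval_command` are hypothetical helper names, not part of the auto-round CLI.

```python
# Task names copied from the README diff's --tasks argument.
TASKS = [
    "lambada_openai", "hellaswag", "piqa", "winogrande",
    "truthfulqa_mc1", "truthfulqa_mc2", "openbookqa", "boolq",
    "arc_easy", "arc_challenge", "mmlu", "gsm8k", "cmmlu", "ceval-valid",
]

def build_tasks_arg(tasks):
    """Join task names into the comma-separated form the CLI expects."""
    return ",".join(tasks)

def build_eval_command(model, eval_bs, tasks):
    """Compose an argv list for a subprocess-style `auto-round --eval` call.

    Hypothetical helper; flags mirror the command shown in the diff.
    """
    return [
        "auto-round",
        "--model", model,
        "--eval",
        "--eval_bs", str(eval_bs),
        "--tasks", build_tasks_arg(tasks),
    ]

cmd = build_eval_command("Intel/Qwen2.5-0.5B-Instruct-int4-inc", 16, TASKS)
print(" ".join(cmd))
```

The argv-list form can be passed directly to `subprocess.run` without shell quoting concerns.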