OPEA (Safetensors · llama · 4-bit precision · awq)
Commit 051495b by cicdatopea (1 parent: 0244a6b)

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -13,7 +13,7 @@ This model is an int4 model with group_size 128 and symmetric quantization of [t
 from auto_round import AutoRoundConfig ##must import for auto_round format
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-quantized_model_dir = "OPEA/falcon3-3B-int4-sym-awq-inc"
+quantized_model_dir = "OPEA/Falcon3-3B-Base-int4-sym-awq-inc"
 tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
 model = AutoModelForCausalLM.from_pretrained(
     quantized_model_dir,
@@ -70,7 +70,7 @@ text = "There is a girl who likes adventure,"
 pip3 install lm-eval==0.4.5
 
 ```bash
-auto-round --model "OPEA/falcon3-3B-int4-sym-awq-inc" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu
+auto-round --model "OPEA/Falcon3-3B-Base-int4-sym-awq-inc" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu
 ```
 
 | Metric | BF16 | INT4 |