OPEA / Qwen2.5-72B-Instruct-int4-inc

Commit b162b49 (parent: 79441fe), committed by sys-lpot-val

upload auto_gptq format
.gitattributes CHANGED
@@ -33,3 +33,11 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+added_tokens.json filter=lfs diff=lfs merge=lfs -text
+config.json filter=lfs diff=lfs merge=lfs -text
+generation_config.json filter=lfs diff=lfs merge=lfs -text
+quantize_config.json filter=lfs diff=lfs merge=lfs -text
+special_tokens_map.json filter=lfs diff=lfs merge=lfs -text
+tokenizer_config.json filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
+vocab.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
---
license: apache-2.0
datasets:
- NeelNanda/pile-10k
---

## Model Details

This is an int4 model (group_size 128) of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), quantized with [intel/auto-round](https://github.com/intel/auto-round). The auto-round package is required to run this model.
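To make the "int4 with group_size 128" setting concrete, here is a minimal sketch of plain group-wise asymmetric quantization. It is only an illustration of the storage format, not auto-round's actual method, which additionally tunes the rounding via signed gradient descent (see the Cite section).

```python
# Toy illustration of group-wise int4 quantization (NOT auto-round's
# algorithm): each group of 128 weights along the input dimension gets
# its own scale and zero-point, and values are rounded to 4-bit codes.
import torch

def quantize_groupwise(w: torch.Tensor, group_size: int = 128, bits: int = 4):
    out_features, in_features = w.shape
    g = w.reshape(out_features, in_features // group_size, group_size)
    wmin = g.min(dim=-1, keepdim=True).values
    wmax = g.max(dim=-1, keepdim=True).values
    scale = (wmax - wmin) / (2**bits - 1)        # one scale per group
    zero = torch.round(-wmin / scale)            # one zero-point per group
    q = torch.clamp(torch.round(g / scale) + zero, 0, 2**bits - 1)
    return q.to(torch.uint8), scale, zero

def dequantize_groupwise(q, scale, zero, shape):
    return ((q.float() - zero) * scale).reshape(shape)

w = torch.randn(8, 256)                          # toy weight matrix
q, scale, zero = quantize_groupwise(w)
w_hat = dequantize_groupwise(q, scale, zero, w.shape)
print("max abs error:", (w - w_hat).abs().max().item())
```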
## How To Use

### INT4 Inference
```python
# git clone https://github.com/intel/auto-round.git
# cd auto-round && pip install -vvv --no-build-isolation -e .
from auto_round import AutoHfQuantizer  # must import (registers auto-round's quantizer with transformers)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_dir = "OPEA/Qwen2.5-72B-Instruct-int4-inc"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)

model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype='auto',
    device_map="auto",
)

# import habana_frameworks.torch.core as htcore   # uncomment for HPU
# import habana_frameworks.torch.hpu as hthpu     # uncomment for HPU
# model = model.to(torch.bfloat16).to("hpu")      # uncomment for HPU

prompt = "There is a girl who likes adventure,"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted input string, then tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=50,  # change this to align with the official usage
    do_sample=False     # change this to align with the official usage
)
# Strip the prompt tokens so only the completion is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

# prompt = "There is a girl who likes adventure,"
# That sounds like a wonderful trait! A girl who enjoys adventure likely has a spirit of curiosity, bravery, and a willingness to explore the unknown. Whether it's trying new activities, traveling to new places, or simply seeking out new experiences, her love

# prompt = "Which one is bigger, 9.11 or 9.8"
# To determine which number is bigger between 9.11 and 9.8, you can compare them digit by digit:
# 1. Compare the whole number parts:
#    - Both numbers have the same whole number part, which is 9.

# prompt = "Once upon a time,"
# Once upon a time, in a land far, far away, there was a kingdom known for its beauty and prosperity. The kingdom was ruled by a wise and just king who loved his people dearly. In the heart of the kingdom stood a magnificent castle

# prompt = "请介绍一下阿里巴巴公司" (Please introduce Alibaba)
# Alibaba Group is a Chinese multinational technology company founded in 1999 and headquartered in Hangzhou, China. Its businesses span e-commerce, retail, finance, logistics, cloud computing, and more; it owns Taobao, Tmall, Cainiao Network, Alibaba Cloud, and others.
```
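If you would rather see tokens as they are produced instead of waiting for the full completion, transformers' `TextStreamer` can be attached to the same `generate` call. A minimal sketch, reusing `model`, `tokenizer`, and `model_inputs` from the block above:

```python
# Stream decoded text to stdout as it is generated.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    model_inputs.input_ids,
    max_new_tokens=50,
    do_sample=False,
    streamer=streamer,  # prints each decoded chunk as it arrives
)
```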
### Evaluate the model

Install the evaluation harness first, then run the built-in evaluation:

```bash
pip3 install lm-eval==0.4.4
git clone https://github.com/intel/auto-round
cd auto-round
python -m auto_round --model "OPEA/Qwen2.5-72B-Instruct-int4-inc" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k,cmmlu,ceval-valid
```
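The same harness can also be driven from Python. A hedged sketch using lm-eval's `simple_evaluate` (the `model_args` string and task subset are assumptions; adjust them, and import `AutoHfQuantizer` first so the quantized checkpoint loads):

```python
# Rough Python-API equivalent of a subset of the CLI evaluation above
# (lm-eval 0.4.4). The model_args below are assumptions; adjust as needed.
from auto_round import AutoHfQuantizer  # registers the quantizer with transformers
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OPEA/Qwen2.5-72B-Instruct-int4-inc,dtype=auto",
    tasks=["lambada_openai", "hellaswag", "piqa", "winogrande"],
    batch_size=16,
)
print(results["results"])
```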
| Metric         |  BF16  |  INT4  |
|:---------------|:------:|:------:|
| Avg            | 0.7582 | 0.7567 |
| mmlu           | 0.8336 | 0.8306 |
| cmmlu          | 0.8722 | 0.8638 |
| ceval-valid    | 0.8982 | 0.8938 |
| lambada_openai | 0.7518 | 0.7603 |
| hellaswag      | 0.7040 | 0.6970 |
| winogrande     | 0.7577 | 0.7695 |
| piqa           | 0.8335 | 0.8270 |
| truthfulqa_mc1 | 0.5288 | 0.5202 |
| openbookqa     | 0.3860 | 0.3900 |
| boolq          | 0.9046 | 0.9080 |
| arc_easy       | 0.8603 | 0.8577 |
| arc_challenge  | 0.6169 | 0.6109 |
| gsm8k (5-shot) | 0.9090 | 0.9083 |

On average, the INT4 model retains about 99.8% of the BF16 accuracy (0.7567 vs. 0.7582).
### Reproduce the model

Here is a sample command to reproduce the model. We observed a larger accuracy drop on Chinese tasks and recommend using a high-quality Chinese dataset for calibration; however, we did not achieve better accuracy with the public datasets we tried.

```bash
git clone https://github.com/intel/auto-round
cd auto-round
python -m auto_round \
    --model_name Qwen/Qwen2.5-72B-Instruct \
    --device 0 \
    --group_size 128 \
    --nsamples 512 \
    --bits 4 \
    --iter 1000 \
    --disable_eval \
    --model_dtype "float16" \
    --format 'auto_round' \
    --output_dir "./tmp_autoround"
```
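The CLI flags above map onto auto-round's Python API, in case you prefer scripting the reproduction. A sketch based on the project's documented `AutoRound` class (argument names may vary across versions):

```python
# Python-API version of the reproduction command above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/Qwen2.5-72B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

autoround = AutoRound(
    model, tokenizer,
    bits=4, group_size=128,    # int4, group_size 128, as in this model card
    nsamples=512, iters=1000,  # calibration settings mirroring the CLI flags
)
autoround.quantize()
autoround.save_quantized("./tmp_autoround", format="auto_round")
```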
## Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:

* Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
* Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite

```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```

[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
added_tokens.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:58b54bbe36fc752f79a24a271ef66a0a0830054b4dfad94bde757d851968060b
+size 605
config.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5cf5ce83e29709fbad13150e1f42418d71ae6fdd05d544339df913887681ed7c
+size 1374
generation_config.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:519a8db66fcfd0d18731eb75169ff8393a6f9c479fd50142fb5ee2580bb9ae64
+size 243
merges.txt ADDED
The diff for this file is too large to render. See raw diff.
model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ace6dcde1cdc7e7c1fd31bed63210d730ce354457fc97474a6fc9c40b3ded0d6
+size 41502386384
quantize_config.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3f8aef0ce8c1ca183f335662aeda631f88690c5097d3fefa7f03f7134ac0606
+size 558
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76862e765266b85aa9459767e33cbaf13970f327a0e88d1c65846c2ddd3a1ecd
+size 613
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
+size 11421896
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e88129d9769a0b14b1587a7d5e829fe93ac0e1511636471fdfc0811951418e6
+size 7306
vocab.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910
+size 2776833