juvi21 committed on
Commit 6549b55 · verified · 1 Parent(s): b08215b

Create README.md

Files changed (1): README.md (+283, -0)

---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
tags:
- axolotl
- 01-ai/Yi-1.5-9B-Chat
- finetune
- gguf
---

# Hermes-2.5-Yi-1.5-9B-Chat-GGUF

This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) on the [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) dataset.
I'm very happy with the results. The model now seems noticeably smarter and more "aware" in certain situations, and it gains quite a big edge on the AGIEval benchmark over other models in its class.
I plan to extend its context length to 32k with PoSE. This is the GGUF repo; you can find the main repo here: [Hermes-2.5-Yi-1.5-9B-Chat](https://huggingface.co/juvi21/Hermes-2.5-Yi-1.5-9B-Chat).

## Model Details

- **Base Model:** 01-ai/Yi-1.5-9B-Chat
- **Chat Template:** ChatML
- **Dataset:** teknium/OpenHermes-2.5
- **Sequence Length:** 8192 tokens
- **Training:**
  - **Epochs:** 1
  - **Hardware:** 4 nodes × 4 NVIDIA A100 40GB GPUs
  - **Duration:** 48:32:13
  - **Cluster:** KIT SCC Cluster

## Benchmarks (n_shots = 0)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/659c4ecb413a1376bee2f661/0wv3AMaoete7ysT005n89.png)

| Benchmark       | Score  |
|-----------------|--------|
| ARC (Challenge) | 52.47% |
| ARC (Easy)      | 81.65% |
| BoolQ           | 87.22% |
| HellaSwag       | 60.52% |
| OpenBookQA      | 33.60% |
| PIQA            | 81.12% |
| Winogrande      | 72.22% |
| AGIEval         | 38.46% |
| TruthfulQA      | 44.22% |
| MMLU            | 59.72% |
| IFEval          | 47.96% |

For detailed benchmark results, including sub-categories and various metrics, please refer to the [full benchmark table](#full-benchmark-results) at the end of this README.
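
The task names and the Tasks/Filter/n-shot/Metric layout of the full table below match the output of EleutherAI's lm-evaluation-harness, so the zero-shot numbers can most likely be reproduced with it. The snippet below is only a sketch under that assumption; everything other than `num_fewshot=0` (dtype, batch size, task selection) is my guess, not taken from this card:

```python
# Hypothetical reproduction sketch with EleutherAI's lm-evaluation-harness
# (pip install lm_eval); settings other than num_fewshot=0 are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=juvi21/Hermes-2.5-Yi-1.5-9B-Chat,dtype=bfloat16",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag", "winogrande"],
    num_fewshot=0,  # the card reports n_shots = 0
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```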

## GGUF and Quantizations

- llama.cpp [b3166](https://github.com/ggerganov/llama.cpp/releases/tag/b3166)
- [juvi21/Hermes-2.5-Yi-1.5-9B-Chat-GGUF](https://huggingface.co/juvi21/Hermes-2.5-Yi-1.5-9B-Chat-GGUF) is available in:
  - **F16**, **Q8_0**, **Q6_K**, **Q5_K_M**, **Q4_K_M**, **Q3_K_M**, **Q2_K**

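For local inference on the GGUF files, llama-cpp-python is one option. A minimal sketch, assuming `pip install llama-cpp-python` and a downloaded quant; the filename below is a placeholder, so check the repo's file list for the exact name:

```python
# Minimal GGUF inference sketch with llama-cpp-python (assumed installed).
from llama_cpp import Llama

llm = Llama(
    model_path="Hermes-2.5-Yi-1.5-9B-Chat.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,  # matches the training sequence length
)

# ChatML-formatted prompt, as described in the ChatML section below.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the question to 42?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```
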
## Usage

To use this model, you can load it with the Hugging Face Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("juvi21/Hermes-2.5-Yi-1.5-9B-Chat")
tokenizer = AutoTokenizer.from_pretrained("juvi21/Hermes-2.5-Yi-1.5-9B-Chat")

# Generate text
input_text = "What is the question to 42?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```

## ChatML Prompt Format

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
Knock Knock, who is there?<|im_end|>
<|im_start|>assistant
Hi there!<|im_end|>
```
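
With Transformers, the same format can usually be produced through the tokenizer's chat template rather than by hand. A short sketch, assuming the chat template bundled with this repo's tokenizer matches the ChatML layout above:

```python
# Build a ChatML prompt via the tokenizer's chat template (assumed to be ChatML).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("juvi21/Hermes-2.5-Yi-1.5-9B-Chat")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Knock Knock, who is there?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should show the <|im_start|>/<|im_end|> structure above
```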

## License

This model is released under the Apache 2.0 license.

## Acknowledgements

Special thanks to:
- Teknium for the great OpenHermes-2.5 dataset
- 01-ai for their great base model

## Citation

If you use this model in your research, consider citing it, and in any case definitely cite NousResearch and 01-ai:

```bibtex
@misc{hermes-2.5-yi-1.5-9b-chat,
  author = {juvi21},
  title  = {Hermes-2.5-Yi-1.5-9B-Chat},
  year   = {2024},
}
```
## Full Benchmark Results

| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|---------------------------------------|-------|------|-----:|-----------------------|---|------:|---|------|
|agieval |N/A |none | 0|acc |↑ | 0.5381|± |0.0049|
| | |none | 0|acc_norm |↑ | 0.5715|± |0.0056|
| - agieval_aqua_rat | 1|none | 0|acc |↑ | 0.3858|± |0.0306|
| | |none | 0|acc_norm |↑ | 0.3425|± |0.0298|
| - agieval_gaokao_biology | 1|none | 0|acc |↑ | 0.6048|± |0.0338|
| | |none | 0|acc_norm |↑ | 0.6000|± |0.0339|
| - agieval_gaokao_chemistry | 1|none | 0|acc |↑ | 0.4879|± |0.0348|
| | |none | 0|acc_norm |↑ | 0.4106|± |0.0343|
| - agieval_gaokao_chinese | 1|none | 0|acc |↑ | 0.5935|± |0.0314|
| | |none | 0|acc_norm |↑ | 0.5813|± |0.0315|
| - agieval_gaokao_english | 1|none | 0|acc |↑ | 0.8235|± |0.0218|
| | |none | 0|acc_norm |↑ | 0.8431|± |0.0208|
| - agieval_gaokao_geography | 1|none | 0|acc |↑ | 0.7085|± |0.0323|
| | |none | 0|acc_norm |↑ | 0.6985|± |0.0326|
| - agieval_gaokao_history | 1|none | 0|acc |↑ | 0.7830|± |0.0269|
| | |none | 0|acc_norm |↑ | 0.7660|± |0.0277|
| - agieval_gaokao_mathcloze | 1|none | 0|acc |↑ | 0.0508|± |0.0203|
| - agieval_gaokao_mathqa | 1|none | 0|acc |↑ | 0.3761|± |0.0259|
| | |none | 0|acc_norm |↑ | 0.3590|± |0.0256|
| - agieval_gaokao_physics | 1|none | 0|acc |↑ | 0.4950|± |0.0354|
| | |none | 0|acc_norm |↑ | 0.4700|± |0.0354|
| - agieval_jec_qa_ca | 1|none | 0|acc |↑ | 0.6557|± |0.0150|
| | |none | 0|acc_norm |↑ | 0.5926|± |0.0156|
| - agieval_jec_qa_kd | 1|none | 0|acc |↑ | 0.7310|± |0.0140|
| | |none | 0|acc_norm |↑ | 0.6610|± |0.0150|
| - agieval_logiqa_en | 1|none | 0|acc |↑ | 0.5177|± |0.0196|
| | |none | 0|acc_norm |↑ | 0.4839|± |0.0196|
| - agieval_logiqa_zh | 1|none | 0|acc |↑ | 0.4854|± |0.0196|
| | |none | 0|acc_norm |↑ | 0.4501|± |0.0195|
| - agieval_lsat_ar | 1|none | 0|acc |↑ | 0.2913|± |0.0300|
| | |none | 0|acc_norm |↑ | 0.2696|± |0.0293|
| - agieval_lsat_lr | 1|none | 0|acc |↑ | 0.7196|± |0.0199|
| | |none | 0|acc_norm |↑ | 0.6824|± |0.0206|
| - agieval_lsat_rc | 1|none | 0|acc |↑ | 0.7212|± |0.0274|
| | |none | 0|acc_norm |↑ | 0.6989|± |0.0280|
| - agieval_math | 1|none | 0|acc |↑ | 0.0910|± |0.0091|
| - agieval_sat_en | 1|none | 0|acc |↑ | 0.8204|± |0.0268|
| | |none | 0|acc_norm |↑ | 0.8301|± |0.0262|
| - agieval_sat_en_without_passage | 1|none | 0|acc |↑ | 0.5194|± |0.0349|
| | |none | 0|acc_norm |↑ | 0.4806|± |0.0349|
| - agieval_sat_math | 1|none | 0|acc |↑ | 0.5864|± |0.0333|
| | |none | 0|acc_norm |↑ | 0.5409|± |0.0337|
|arc_challenge | 1|none | 0|acc |↑ | 0.5648|± |0.0145|
| | |none | 0|acc_norm |↑ | 0.5879|± |0.0144|
|arc_easy | 1|none | 0|acc |↑ | 0.8241|± |0.0078|
| | |none | 0|acc_norm |↑ | 0.8165|± |0.0079|
|boolq | 2|none | 0|acc |↑ | 0.8624|± |0.0060|
|hellaswag | 1|none | 0|acc |↑ | 0.5901|± |0.0049|
| | |none | 0|acc_norm |↑ | 0.7767|± |0.0042|
|ifeval | 2|none | 0|inst_level_loose_acc |↑ | 0.5156|± |N/A |
| | |none | 0|inst_level_strict_acc |↑ | 0.4748|± |N/A |
| | |none | 0|prompt_level_loose_acc |↑ | 0.3863|± |0.0210|
| | |none | 0|prompt_level_strict_acc|↑ | 0.3309|± |0.0202|
|mmlu |N/A |none | 0|acc |↑ | 0.6942|± |0.0037|
| - abstract_algebra | 0|none | 0|acc |↑ | 0.4900|± |0.0502|
| - anatomy | 0|none | 0|acc |↑ | 0.6815|± |0.0402|
| - astronomy | 0|none | 0|acc |↑ | 0.7895|± |0.0332|
| - business_ethics | 0|none | 0|acc |↑ | 0.7600|± |0.0429|
| - clinical_knowledge | 0|none | 0|acc |↑ | 0.7132|± |0.0278|
| - college_biology | 0|none | 0|acc |↑ | 0.8056|± |0.0331|
| - college_chemistry | 0|none | 0|acc |↑ | 0.5300|± |0.0502|
| - college_computer_science | 0|none | 0|acc |↑ | 0.6500|± |0.0479|
| - college_mathematics | 0|none | 0|acc |↑ | 0.4100|± |0.0494|
| - college_medicine | 0|none | 0|acc |↑ | 0.6763|± |0.0357|
| - college_physics | 0|none | 0|acc |↑ | 0.5000|± |0.0498|
| - computer_security | 0|none | 0|acc |↑ | 0.8200|± |0.0386|
| - conceptual_physics | 0|none | 0|acc |↑ | 0.7489|± |0.0283|
| - econometrics | 0|none | 0|acc |↑ | 0.5877|± |0.0463|
| - electrical_engineering | 0|none | 0|acc |↑ | 0.6759|± |0.0390|
| - elementary_mathematics | 0|none | 0|acc |↑ | 0.6481|± |0.0246|
| - formal_logic | 0|none | 0|acc |↑ | 0.5873|± |0.0440|
| - global_facts | 0|none | 0|acc |↑ | 0.3900|± |0.0490|
| - high_school_biology | 0|none | 0|acc |↑ | 0.8613|± |0.0197|
| - high_school_chemistry | 0|none | 0|acc |↑ | 0.6453|± |0.0337|
| - high_school_computer_science | 0|none | 0|acc |↑ | 0.8300|± |0.0378|
| - high_school_european_history | 0|none | 0|acc |↑ | 0.8182|± |0.0301|
| - high_school_geography | 0|none | 0|acc |↑ | 0.8485|± |0.0255|
| - high_school_government_and_politics| 0|none | 0|acc |↑ | 0.8964|± |0.0220|
| - high_school_macroeconomics | 0|none | 0|acc |↑ | 0.7923|± |0.0206|
| - high_school_mathematics | 0|none | 0|acc |↑ | 0.4407|± |0.0303|
| - high_school_microeconomics | 0|none | 0|acc |↑ | 0.8655|± |0.0222|
| - high_school_physics | 0|none | 0|acc |↑ | 0.5298|± |0.0408|
| - high_school_psychology | 0|none | 0|acc |↑ | 0.8679|± |0.0145|
| - high_school_statistics | 0|none | 0|acc |↑ | 0.6898|± |0.0315|
| - high_school_us_history | 0|none | 0|acc |↑ | 0.8873|± |0.0222|
| - high_school_world_history | 0|none | 0|acc |↑ | 0.8312|± |0.0244|
| - human_aging | 0|none | 0|acc |↑ | 0.7085|± |0.0305|
| - human_sexuality | 0|none | 0|acc |↑ | 0.7557|± |0.0377|
| - humanities |N/A |none | 0|acc |↑ | 0.6323|± |0.0067|
| - international_law | 0|none | 0|acc |↑ | 0.8099|± |0.0358|
| - jurisprudence | 0|none | 0|acc |↑ | 0.7685|± |0.0408|
| - logical_fallacies | 0|none | 0|acc |↑ | 0.7975|± |0.0316|
| - machine_learning | 0|none | 0|acc |↑ | 0.5179|± |0.0474|
| - management | 0|none | 0|acc |↑ | 0.8835|± |0.0318|
| - marketing | 0|none | 0|acc |↑ | 0.9017|± |0.0195|
| - medical_genetics | 0|none | 0|acc |↑ | 0.8000|± |0.0402|
| - miscellaneous | 0|none | 0|acc |↑ | 0.8225|± |0.0137|
| - moral_disputes | 0|none | 0|acc |↑ | 0.7283|± |0.0239|
| - moral_scenarios | 0|none | 0|acc |↑ | 0.4860|± |0.0167|
| - nutrition | 0|none | 0|acc |↑ | 0.7353|± |0.0253|
| - other |N/A |none | 0|acc |↑ | 0.7287|± |0.0077|
| - philosophy | 0|none | 0|acc |↑ | 0.7170|± |0.0256|
| - prehistory | 0|none | 0|acc |↑ | 0.7346|± |0.0246|
| - professional_accounting | 0|none | 0|acc |↑ | 0.5638|± |0.0296|
| - professional_law | 0|none | 0|acc |↑ | 0.5163|± |0.0128|
| - professional_medicine | 0|none | 0|acc |↑ | 0.6875|± |0.0282|
| - professional_psychology | 0|none | 0|acc |↑ | 0.7092|± |0.0184|
| - public_relations | 0|none | 0|acc |↑ | 0.6727|± |0.0449|
| - security_studies | 0|none | 0|acc |↑ | 0.7347|± |0.0283|
| - social_sciences |N/A |none | 0|acc |↑ | 0.7910|± |0.0072|
| - sociology | 0|none | 0|acc |↑ | 0.8060|± |0.0280|
| - stem |N/A |none | 0|acc |↑ | 0.6581|± |0.0081|
| - us_foreign_policy | 0|none | 0|acc |↑ | 0.8900|± |0.0314|
| - virology | 0|none | 0|acc |↑ | 0.5301|± |0.0389|
| - world_religions | 0|none | 0|acc |↑ | 0.8012|± |0.0306|
|openbookqa | 1|none | 0|acc |↑ | 0.3280|± |0.0210|
| | |none | 0|acc_norm |↑ | 0.4360|± |0.0222|
|piqa | 1|none | 0|acc |↑ | 0.7982|± |0.0094|
| | |none | 0|acc_norm |↑ | 0.8074|± |0.0092|
|truthfulqa |N/A |none | 0|acc |↑ | 0.4746|± |0.0116|
| | |none | 0|bleu_acc |↑ | 0.4700|± |0.0175|
| | |none | 0|bleu_diff |↑ | 0.3214|± |0.6045|
| | |none | 0|bleu_max |↑ |22.5895|± |0.7122|
| | |none | 0|rouge1_acc |↑ | 0.4798|± |0.0175|
| | |none | 0|rouge1_diff |↑ | 0.0846|± |0.7161|
| | |none | 0|rouge1_max |↑ |48.7180|± |0.7833|
| | |none | 0|rouge2_acc |↑ | 0.4149|± |0.0172|
| | |none | 0|rouge2_diff |↑ |-0.4656|± |0.8375|
| | |none | 0|rouge2_max |↑ |34.0585|± |0.8974|
| | |none | 0|rougeL_acc |↑ | 0.4651|± |0.0175|
| | |none | 0|rougeL_diff |↑ |-0.2804|± |0.7217|
| | |none | 0|rougeL_max |↑ |45.2232|± |0.7971|
| - truthfulqa_gen | 3|none | 0|bleu_acc |↑ | 0.4700|± |0.0175|
| | |none | 0|bleu_diff |↑ | 0.3214|± |0.6045|
| | |none | 0|bleu_max |↑ |22.5895|± |0.7122|
| | |none | 0|rouge1_acc |↑ | 0.4798|± |0.0175|
| | |none | 0|rouge1_diff |↑ | 0.0846|± |0.7161|
| | |none | 0|rouge1_max |↑ |48.7180|± |0.7833|
| | |none | 0|rouge2_acc |↑ | 0.4149|± |0.0172|
| | |none | 0|rouge2_diff |↑ |-0.4656|± |0.8375|
| | |none | 0|rouge2_max |↑ |34.0585|± |0.8974|
| | |none | 0|rougeL_acc |↑ | 0.4651|± |0.0175|
| | |none | 0|rougeL_diff |↑ |-0.2804|± |0.7217|
| | |none | 0|rougeL_max |↑ |45.2232|± |0.7971|
| - truthfulqa_mc1 | 2|none | 0|acc |↑ | 0.3905|± |0.0171|
| - truthfulqa_mc2 | 2|none | 0|acc |↑ | 0.5587|± |0.0156|
|winogrande | 1|none | 0|acc |↑ | 0.7388|± |0.0123|

| Groups |Version|Filter|n-shot| Metric | | Value | |Stderr|
|------------------|-------|------|-----:|-----------|---|------:|---|-----:|
|agieval |N/A |none | 0|acc |↑ | 0.5381|± |0.0049|
| | |none | 0|acc_norm |↑ | 0.5715|± |0.0056|
|mmlu |N/A |none | 0|acc |↑ | 0.6942|± |0.0037|
| - humanities |N/A |none | 0|acc |↑ | 0.6323|± |0.0067|
| - other |N/A |none | 0|acc |↑ | 0.7287|± |0.0077|
| - social_sciences|N/A |none | 0|acc |↑ | 0.7910|± |0.0072|
| - stem |N/A |none | 0|acc |↑ | 0.6581|± |0.0081|
|truthfulqa |N/A |none | 0|acc |↑ | 0.4746|± |0.0116|
| | |none | 0|bleu_acc |↑ | 0.4700|± |0.0175|
| | |none | 0|bleu_diff |↑ | 0.3214|± |0.6045|
| | |none | 0|bleu_max |↑ |22.5895|± |0.7122|
| | |none | 0|rouge1_acc |↑ | 0.4798|± |0.0175|
| | |none | 0|rouge1_diff|↑ | 0.0846|± |0.7161|
| | |none | 0|rouge1_max |↑ |48.7180|± |0.7833|
| | |none | 0|rouge2_acc |↑ | 0.4149|± |0.0172|
| | |none | 0|rouge2_diff|↑ |-0.4656|± |0.8375|
| | |none | 0|rouge2_max |↑ |34.0585|± |0.8974|
| | |none | 0|rougeL_acc |↑ | 0.4651|± |0.0175|
| | |none | 0|rougeL_diff|↑ |-0.2804|± |0.7217|
| | |none | 0|rougeL_max |↑ |45.2232|± |0.7971|