Suparious committed on
Commit 2fbac2c · verified · 1 Parent(s): be5a200

Adding model tags

Files changed (1)
  1. README.md +177 -1
README.md CHANGED
@@ -1,3 +1,179 @@
 ---
-license: apache-2.0
+license: other
+tags:
+- axolotl
+- generated_from_trainer
+- Mistral
+- instruct
+- finetune
+- chatml
+- gpt4
+- synthetic data
+- science
+- physics
+- chemistry
+- biology
+- math
+- quantized
+- 4-bit
+- AWQ
+- autotrain_compatible
+- endpoints_compatible
+- text-generation-inference
+base_model: alpindale/Mistral-7B-v0.2-hf
+datasets:
+- allenai/ai2_arc
+- camel-ai/physics
+- camel-ai/chemistry
+- camel-ai/biology
+- camel-ai/math
+- metaeval/reclor
+- openbookqa
+- mandyyyyii/scibench
+- derek-thomas/ScienceQA
+- TIGER-Lab/ScienceEval
+- jondurbin/airoboros-3.2
+- LDJnr/Capybara
+- Cot-Alpaca-GPT4-From-OpenHermes-2.5
+- STEM-AI-mtl/Electrical-engineering
+- knowrohit07/saraswati-stem
+- sablo/oasst2_curated
+- lmsys/lmsys-chat-1m
+- TIGER-Lab/MathInstruct
+- bigbio/med_qa
+- meta-math/MetaMathQA-40K
+- openbookqa
+- piqa
+- metaeval/reclor
+- derek-thomas/ScienceQA
+- scibench
+- sciq
+- Open-Orca/SlimOrca
+- migtissera/Synthia-v1.3
+- TIGER-Lab/ScienceEval
+- allenai/WildChat
+- microsoft/orca-math-word-problems-200k
+- openchat/openchat_sharegpt4_dataset
+- teknium/GPTeacher-General-Instruct
+- m-a-p/CodeFeedback-Filtered-Instruction
+model-index:
+- name: Einstein-v5-v0.2-7B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 60.92
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 80.99
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 61.02
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 52.59
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 78.69
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 59.67
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
+      name: Open LLM Leaderboard
+quantized_by: Suparious
+pipeline_tag: text-generation
+model_creator: Weyaxi
+model_name: Einstein-v5-v0.2-7B
+inference: false
+prompt_template: '<|im_start|>system
+
+  {system_message}<|im_end|>
+
+  <|im_start|>user
+
+  {prompt}<|im_end|>
+
+  <|im_start|>assistant
+
+  '
 ---
+# Weyaxi/Einstein-v5-v0.2-7B AWQ
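
For reference, a minimal usage sketch for this AWQ build: it loads the 4-bit weights with the AutoAWQ library and formats the request with the ChatML `prompt_template` declared in the metadata above. The repository id, system message, and generation settings below are placeholders and assumptions, not values taken from this commit.

```python
# Minimal sketch. Assumptions: autoawq and transformers are installed, a CUDA GPU
# is available, and the repo id below is a placeholder for the actual AWQ repository
# this card belongs to (not specified in the diff).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

repo_id = "<this-AWQ-repo-id>"  # placeholder

# Load the 4-bit AWQ weights and the matching tokenizer.
model = AutoAWQForCausalLM.from_quantized(repo_id, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Build a ChatML prompt exactly as the `prompt_template` field describes.
system_message = "You are a helpful assistant."                 # example value
user_prompt = "Explain Newton's second law in one sentence."    # example value
prompt = (
    "<|im_start|>system\n"
    f"{system_message}<|im_end|>\n"
    "<|im_start|>user\n"
    f"{user_prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Tokenize, generate, and decode the completion.
inputs = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The prompt string mirrors the `prompt_template` field one-to-one, so the same format applies if the model is served through another AWQ-capable backend instead.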