abideen committed on
Commit f67de5c
1 Parent(s): 1d3a07d

Update README.md

Files changed (1):
  1. README.md +93 -0
README.md CHANGED
@@ -85,6 +85,99 @@ print(tokenizer.decode(outputs[0]))
 
 # Nous Benchmark
 
+ AGIEval
+
+ | Task | Version | Metric | Value | | StdErr |
+ |-------------------------------------------|---------|--------|-------|---|---------|
+ | agieval\_aqua\_rat | 0 | acc | 24.02 | ± | 2.69 |
+ | agieval\_aqua\_rat | 0 | acc\_norm | 24.02 | ± | 2.69 |
+ | agieval\_logiqa\_en | 0 | acc | 23.20 | ± | 1.66 |
+ | agieval\_logiqa\_en | 0 | acc\_norm | 24.42 | ± | 1.69 |
+ | agieval\_lsat\_ar | 0 | acc | 18.26 | ± | 2.55 |
+ | agieval\_lsat\_ar | 0 | acc\_norm | 18.70 | ± | 2.58 |
+ | agieval\_lsat\_lr | 0 | acc | 22.35 | ± | 1.85 |
+ | agieval\_lsat\_lr | 0 | acc\_norm | 23.53 | ± | 1.88 |
+ | agieval\_lsat\_rc | 0 | acc | 20.82 | ± | 2.48 |
+ | agieval\_lsat\_rc | 0 | acc\_norm | 20.07 | ± | 2.45 |
+ | agieval\_sat\_en | 0 | acc | 32.52 | ± | 3.27 |
+ | agieval\_sat\_en | 0 | acc\_norm | 32.52 | ± | 3.27 |
+ | agieval\_sat\_en\_without\_passage | 0 | acc | 25.73 | ± | 3.05 |
+ | agieval\_sat\_en\_without\_passage | 0 | acc\_norm | 24.27 | ± | 2.99 |
+ | agieval\_sat\_math | 0 | acc | 25.00 | ± | 2.93 |
+ | agieval\_sat\_math | 0 | acc\_norm | 20.91 | ± | 2.75 |
+
+ Average: 24.11
+
+ GPT4All
+
+ | Task | Version | Metric | Value | | StdErr |
+ |----------------------|---------|--------|-------|---|---------|
+ | arc\_challenge | 0 | acc | 21.77 | ± | 1.21 |
+ | arc\_challenge | 0 | acc\_norm | 24.15 | ± | 1.25 |
+ | arc\_easy | 0 | acc | 37.37 | ± | 0.99 |
+ | arc\_easy | 0 | acc\_norm | 36.95 | ± | 0.99 |
+ | boolq | 1 | acc | 65.60 | ± | 0.83 |
+ | hellaswag | 0 | acc | 34.54 | ± | 0.47 |
+ | hellaswag | 0 | acc\_norm | 40.54 | ± | 0.49 |
+ | openbookqa | 0 | acc | 15.00 | ± | 1.59 |
+ | openbookqa | 0 | acc\_norm | 27.40 | ± | 2.00 |
+ | piqa | 0 | acc | 60.88 | ± | 1.14 |
+ | piqa | 0 | acc\_norm | 60.55 | ± | 1.14 |
+ | winogrande | 0 | acc | 50.91 | ± | 1.41 |
+
+ Average: 40.01
+
+ BigBench (MCG = multiple\_choice\_grade)
+
+ | Task | Version | Metric | Value | Std Err |
+ |-----------------------------------|---------|--------|--------|---------|
+ | bigbench\_causal\_judgement | 0 | MCG | 50.00 | 2.26 |
+ | bigbench\_date\_understanding | 0 | MCG | 49.14 | 2.18 |
+ | bigbench\_disambiguation\_qa | 0 | MCG | 49.31 | 2.74 |
+ | bigbench\_geometric\_shapes | 0 | MCG | 14.18 | 1.37 |
+ | bigbench\_logical\_deduction\_5objs | 0 | MCG | 49.41 | 2.73 |
+ | bigbench\_logical\_deduction\_7objs | 0 | MCG | 41.48 | 2.46 |
+ | bigbench\_logical\_deduction\_3objs | 0 | MCG | 69.33 | 2.75 |
+ | bigbench\_movie\_recommendation | 0 | MCG | 51.71 | 2.25 |
+ | bigbench\_navigate | 0 | MCG | 50.00 | 1.58 |
+ | bigbench\_reasoning\_colored\_obj | 0 | MCG | 51.92 | 0.99 |
+ | bigbench\_ruin\_names | 0 | MCG | 48.14 | 2.01 |
+ | bigbench\_salient\_trans\_err\_detec | 0 | MCG | 39.92 | 1.20 |
+ | bigbench\_snarks | 0 | MCG | 64.14 | 3.71 |
+ | bigbench\_sports\_understanding | 0 | MCG | 55.31 | 1.59 |
+ | bigbench\_temporal\_sequences | 0 | MCG | 46.92 | 1.40 |
+ | bigbench\_tsk\_shuff\_objs\_5 | 0 | MCG | 25.04 | 1.01 |
+ | bigbench\_tsk\_shuff\_objs\_7 | 0 | MCG | 15.04 | 0.72 |
+ | bigbench\_tsk\_shuff\_objs\_3 | 0 | MCG | 55.33 | 2.75 |
+
+ Average: 44.75
+
+ TruthfulQA
+
+ | Task | Version | Metric | Value | Std Err |
+ |----------------------------------|---------|--------|--------|----------|
+ | truthfulqa\_mc | 1 | mc1 | 30.11 | 1.61 |
+ | truthfulqa\_mc | 1 | mc2 | 47.69 | 1.61 |
+
+ Average: 38.90
+
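+ The tables above follow lm-evaluation-harness output format. As a reproduction aid, here is a minimal sketch, assuming EleutherAI's lm-evaluation-harness (>= 0.4, `pip install lm-eval`) is installed; the model id, dtype, batch size, and task subset are placeholders rather than the exact settings behind these numbers, and task availability depends on the harness version.
+
+ ```python
+ import lm_eval
+
+ # Evaluate a GPT4All-style subset; the agieval_* and bigbench_* task names
+ # from the tables above can be swapped in if your harness version ships them.
+ results = lm_eval.simple_evaluate(
+     model="hf",
+     model_args="pretrained=YOUR_MODEL_ID,dtype=float16",  # placeholder model id
+     tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag",
+            "openbookqa", "piqa", "winogrande"],
+     batch_size=8,  # placeholder; tune to your hardware
+ )
+ print(results["results"])  # per-task acc / acc_norm / stderr
+ ```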
+
+ # OpenLLM Benchmark
+
+ | Task |Version| Metric |Value| |Stderr|
+ |-------------|------:|--------|----:|---|-----:|
+ |arc_challenge| 0|acc |40.44|± | 1.43|
+ | | |acc_norm|43.81|± | 1.34|
+ |hellaswag | 0|acc |48.10|± | 0.45|
+ | | |acc_norm|62.73|± | 0.32|
+ |gsm8k | 0|acc | 5.60|± | 0.60|
+ |winogrande | 0|acc |60.91|± | 1.30|
+ |mmlu | 0|acc |37.62|± | 0.60|
+
+ Average: 42.13
+
+ ### TruthfulQA
+
+ | Task |Version|Metric|Value| |Stderr|
+ |-------------|------:|------|----:|---|-----:|
+ |truthfulqa_mc| 1|mc1 |29.00|± | 1.58|
+ | | |mc2 |45.83|± | 1.59|
+
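+ For reference, the OpenLLM average above is assumed to follow the Open LLM Leaderboard convention of taking `acc_norm` where reported (arc_challenge, hellaswag) and `acc` otherwise; that convention is an assumption, since the harness does not aggregate across tasks for you. A sketch of the arithmetic:
+
+ ```python
+ # Per-task scores copied from the table above; the metric choice per task
+ # (acc_norm for arc_challenge/hellaswag, acc elsewhere) is an assumption.
+ scores = {
+     "arc_challenge": 43.81,  # acc_norm
+     "hellaswag": 62.73,      # acc_norm
+     "gsm8k": 5.60,           # acc
+     "winogrande": 60.91,     # acc
+     "mmlu": 37.62,           # acc
+ }
+ average = sum(scores.values()) / len(scores)
+ print(f"OpenLLM average: {average:.2f}")  # -> 42.13
+ ```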
 
182
  ### Training hyperparameters
183