abhinavnmagic committed on
Commit bb83fe4 · verified · 1 Parent(s): 423c174

Update README.md

Files changed (1):
  1. README.md +15 -15
README.md CHANGED
@@ -28,7 +28,7 @@ license: llama3.1
 - **Model Developers:** Neural Magic
 
 Quantized version of [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct).
-It achieves an average score of 86.01 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 86.63.
+It achieves an average score of 86.47 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 86.63.
 
 ### Model Optimizations
 
@@ -148,9 +148,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge, GS
 </td>
 <td>86.25
 </td>
-<td>85.97
+<td>86.17
 </td>
-<td>99.67%
+<td>99.90%
 </td>
 </tr>
 <tr>
@@ -158,9 +158,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge, GS
 </td>
 <td>96.93
 </td>
-<td>95.39
+<td>95.3
 </td>
-<td>98.41%
+<td>98.31%
 </td>
 </tr>
 <tr>
@@ -168,9 +168,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge, GS
 </td>
 <td>96.44
 </td>
-<td>95.83
+<td>96.05
 </td>
-<td>99.36%
+<td>99.59%
 </td>
 </tr>
 <tr>
@@ -178,9 +178,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge, GS
 </td>
 <td>88.33
 </td>
-<td>88.16
+<td>88.27
 </td>
-<td>99.80%
+<td>99.93%
 </td>
 </tr>
 <tr>
@@ -188,9 +188,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge, GS
 </td>
 <td>87.21
 </td>
-<td>85.95
+<td>87.76
 </td>
-<td>98.55%
+<td>100.63%
 </td>
 </tr>
 <tr>
@@ -198,9 +198,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge, GS
 </td>
 <td>64.64
 </td>
-<td>64.75
+<td>65.27
 </td>
-<td>100.17%
+<td>100.97%
 </td>
 </tr>
 <tr>
@@ -208,9 +208,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge, GS
 </td>
 <td><strong>86.63</strong>
 </td>
-<td><strong>86.01</strong>
+<td><strong>86.47</strong>
 </td>
-<td><strong>99.28%</strong>
+<td><strong>99.81%</strong>
 </td>
 </tr>
 </table>
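The recovery percentages updated in this diff appear to be the quantized score divided by the unquantized baseline score, expressed as a percentage. A minimal sketch of that arithmetic (the `recovery` helper is illustrative, not from the repo; the displayed table values may differ in the last digit depending on how the underlying scores were rounded before division):

```python
def recovery(quantized: float, baseline: float) -> float:
    """Percentage of the baseline benchmark score retained after quantization."""
    return quantized / baseline * 100


# Average OpenLLM v1 scores from the updated README table.
print(f"{recovery(86.47, 86.63):.2f}%")   # close to the 99.81% shown for the average
# ARC row: the quantized model slightly exceeds the baseline.
print(f"{recovery(65.27, 64.64):.2f}%")   # close to the 100.97% shown
```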