Adding Evaluation Results
#4 opened by leaderboard-pr-bot

README.md CHANGED
@@ -159,4 +159,17 @@ Please kindly cite using the following BibTeX:
 journal={arXiv preprint arXiv:2302.13971},
 year={2023}
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pankajmathur__orca_mini_v3_7b)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 47.98                     |
+| ARC (25-shot)         | 56.91                     |
+| HellaSwag (10-shot)   | 79.64                     |
+| MMLU (5-shot)         | 52.37                     |
+| TruthfulQA (0-shot)   | 50.51                     |
+| Winogrande (5-shot)   | 74.27                     |
+| GSM8K (5-shot)        | 7.13                      |
+| DROP (3-shot)         | 15.06                     |
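
For anyone reproducing these numbers: the Avg. row is the unweighted arithmetic mean of the seven benchmark scores in the table, and the linked details dataset holds the per-task results behind them. Below is a minimal Python sketch, assuming the `datasets` library is installed; the config and split names passed to `load_dataset` are assumptions, so check the details repo for the configs it actually exposes.

```python
# Sketch: reproduce the "Avg." row from the table and (optionally) pull the
# detailed per-task results. Assumes `pip install datasets`.
from datasets import load_dataset

scores = {
    "ARC (25-shot)": 56.91,
    "HellaSwag (10-shot)": 79.64,
    "MMLU (5-shot)": 52.37,
    "TruthfulQA (0-shot)": 50.51,
    "Winogrande (5-shot)": 74.27,
    "GSM8K (5-shot)": 7.13,
    "DROP (3-shot)": 15.06,
}

# "Avg." is the plain arithmetic mean of the seven benchmark scores.
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> Avg. = 47.98

# Hypothetical config/split names -- the details repo lists the real ones.
details = load_dataset(
    "open-llm-leaderboard/details_pankajmathur__orca_mini_v3_7b",
    "harness_arc_challenge_25",  # assumed config for the 25-shot ARC run
    split="latest",              # assumed split name
)
print(details)
```

Summing the seven table values (335.89) and dividing by 7 gives 47.98, matching the Avg. row above.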