Adding Evaluation Results
#1 by leaderboard-pr-bot - opened

README.md CHANGED
@@ -55,4 +55,17 @@ The following hyperparameters were used during training:
 - Transformers 4.38.0.dev0
 - Pytorch 2.1.0+cu121
 - Datasets 2.17.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Menouar__saqr-7b-beta)
+
+|             Metric              |Value|
+|---------------------------------|----:|
+|Avg.                             |44.84|
+|AI2 Reasoning Challenge (25-Shot)|47.78|
+|HellaSwag (10-Shot)              |77.61|
+|MMLU (5-Shot)                    |25.80|
+|TruthfulQA (0-shot)              |39.38|
+|Winogrande (5-shot)              |70.56|
+|GSM8k (5-shot)                   | 7.88|
+
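For anyone sanity-checking the table: the reported Avg. of 44.84 is consistent with the unweighted arithmetic mean of the six benchmark scores ((47.78 + 77.61 + 25.80 + 39.38 + 70.56 + 7.88) / 6 = 44.835, which rounds to 44.84). A minimal check:

```python
# Verify that the reported "Avg." is the plain mean of the six benchmark scores
# from the table above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 47.78,
    "HellaSwag (10-Shot)": 77.61,
    "MMLU (5-Shot)": 25.80,
    "TruthfulQA (0-shot)": 39.38,
    "Winogrande (5-shot)": 70.56,
    "GSM8k (5-shot)": 7.88,
}
mean = sum(scores.values()) / len(scores)
print(f"mean = {mean:.3f}")  # 44.835, which matches the reported 44.84 after rounding
assert abs(mean - 44.84) < 0.01
```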
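The per-task outputs live in the details dataset linked in the added section. A minimal sketch of loading them with the `datasets` library; the config name and split below are assumptions about how the details repo is laid out, not something stated in this PR:

```python
from datasets import load_dataset

# Hypothetical config/split names: inspect the details repo for the actual ones.
details = load_dataset(
    "open-llm-leaderboard/details_Menouar__saqr-7b-beta",  # repo linked in the PR
    "harness_gsm8k_5",  # assumed per-task config name
    split="latest",     # assumed split label
)
print(details)
```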