Adding Evaluation Results #2
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -103,4 +103,17 @@ Accuracy
 
 ## Model Card Authors and Contact
 
-[Andron00e](https://github.com/Andron00e)
+[Andron00e](https://github.com/Andron00e)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 18.18 |
+| ARC (25-shot)       | 25.94 |
+| HellaSwag (10-shot) | 25.76 |
+| MMLU (5-shot)       | 24.65 |
+| TruthfulQA (0-shot) | 0.0   |
+| Winogrande (5-shot) | 50.83 |
+| GSM8K (5-shot)      | 0.0   |
+| DROP (3-shot)       | 0.04  |
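For readers who want to go past the summary table, the sketch below shows one way to pull the linked details dataset and to sanity-check the Avg. row. It is not part of this PR: it assumes the `datasets` library is installed, assumes the details repo stores one config per benchmark run (so configs are discovered at runtime rather than hard-coded), and assumes Avg. is the plain mean of the seven task scores.

```python
# Minimal sketch, not part of this PR. Assumptions: `datasets` is installed
# and the details repo exposes one config per benchmark run.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca"

# Each benchmark run (ARC, HellaSwag, MMLU, ...) is assumed to live in its
# own config; list them instead of guessing names.
configs = get_dataset_config_names(repo)
print(configs)

# Load one config; printing the resulting DatasetDict shows its splits,
# which hold the per-example records behind the summary scores.
details = load_dataset(repo, configs[0])
print(details)

# Sanity check: assuming Avg. is the plain mean of the seven task scores,
# the rounded table values give ~18.17, matching the reported 18.18 up to
# rounding of the unrounded per-task scores.
scores = [25.94, 25.76, 24.65, 0.0, 50.83, 0.0, 0.04]
print(round(sum(scores) / len(scores), 2))  # 18.17
```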