Adding Evaluation Results

#5
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -151,3 +151,17 @@ Qualitative evaluation suggests that Galpaca frequently outperforms LLaMA-based
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
  }
  ```
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galpaca-30b)
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 40.99 |
+ | ARC (25-shot)       | 49.57 |
+ | HellaSwag (10-shot) | 58.2  |
+ | MMLU (5-shot)       | 43.78 |
+ | TruthfulQA (0-shot) | 41.16 |
+ | Winogrande (5-shot) | 62.51 |
+ | GSM8K (5-shot)      | 2.81  |
+ | DROP (3-shot)       | 28.89 |
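
The Avg. row is the unweighted mean of the seven benchmark scores: (49.57 + 58.2 + 43.78 + 41.16 + 62.51 + 2.81 + 28.89) / 7 ≈ 40.99.

To go beyond the summary table, the details dataset linked in the diff can be loaded with the `datasets` library. A minimal sketch: the repository name comes from the link above, while the per-task config name and the `latest` split alias follow the leaderboard's usual layout and are assumptions here, so check the dataset card for the exact names.

```python
from datasets import load_dataset

# Details repository named in the diff above. The config and split names
# are assumptions based on the leaderboard's usual layout; verify them
# on the dataset card before relying on this snippet.
details = load_dataset(
    "open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galpaca-30b",
    "harness_truthfulqa_mc_0",  # assumed per-task config name
    split="latest",             # assumed alias for the most recent run
)

print(details[0])  # one row per evaluated example
```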