Adding Evaluation Results

#4
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -34,3 +34,17 @@ Not using the format will make the model perform significantly worse than intend
  **Support My Development of New Models**
  <a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
  src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_elinas__chronos-13b-v2)
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 48.32 |
+ | ARC (25-shot)       | 58.7  |
+ | HellaSwag (10-shot) | 82.52 |
+ | MMLU (5-shot)       | 53.39 |
+ | TruthfulQA (0-shot) | 50.55 |
+ | Winogrande (5-shot) | 75.06 |
+ | GSM8K (5-shot)      | 11.3  |
+ | DROP (3-shot)       | 6.74  |
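As a sanity check on the added table, the `Avg.` row equals the unweighted mean of the seven benchmark scores, rounded to two decimals (an assumption about how the leaderboard aggregates; the snippet below only verifies the arithmetic, it does not query the leaderboard):

```python
# Benchmark scores copied from the table in this diff
scores = {
    "ARC (25-shot)": 58.7,
    "HellaSwag (10-shot)": 82.52,
    "MMLU (5-shot)": 53.39,
    "TruthfulQA (0-shot)": 50.55,
    "Winogrande (5-shot)": 75.06,
    "GSM8K (5-shot)": 11.3,
    "DROP (3-shot)": 6.74,
}

# Unweighted mean, rounded to two decimal places
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 48.32, matching the Avg. row
```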