Adding Evaluation Results

#4
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -587,3 +587,17 @@ I am purposingly leaving this license ambiguous (other than the fact you must co
  Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
 
  Either way, by using this model, you agree to completely indemnify me.
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Airoboros-L2-70B-2.1-GPTQ)
+
+ | Metric                | Value                     |
+ |-----------------------|---------------------------|
+ | Avg.                  | 61.76                     |
+ | ARC (25-shot)         | 70.39                     |
+ | HellaSwag (10-shot)   | 86.54                     |
+ | MMLU (5-shot)         | 68.89                     |
+ | TruthfulQA (0-shot)   | 55.55                     |
+ | Winogrande (5-shot)   | 81.61                     |
+ | GSM8K (5-shot)        | 15.24                     |
+ | DROP (3-shot)         | 54.1                      |
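
The linked dataset holds the per-task results behind the table above. As a minimal sketch of pulling them with the `datasets` library: the repo id is taken from the link in this PR, but the config name below is an assumed example of the leaderboard's usual per-task naming and may differ for this model.

```python
# Sketch only: load the leaderboard's detailed results for this model.
# The dataset repo id comes from the PR above; the config name is an
# assumed example (ARC-Challenge, 25-shot) and may not match exactly.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_TheBloke__Airoboros-L2-70B-2.1-GPTQ",
    "harness_arc_challenge_25",  # assumed config name
    # depending on your `datasets` version you may need trust_remote_code=True
)
print(details)  # DatasetDict keyed by the available result snapshots
```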