Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -376,3 +376,17 @@ I am purposingly leaving this license ambiguous (other than the fact you must co
  Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

  Either way, by using this model, you agree to completely indemnify me.
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-2.2.1)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 45.1  |
+ | ARC (25-shot)        | 55.03 |
+ | HellaSwag (10-shot)  | 80.06 |
+ | MMLU (5-shot)        | 47.64 |
+ | TruthfulQA (0-shot)  | 44.65 |
+ | Winogrande (5-shot)  | 73.8  |
+ | GSM8K (5-shot)       | 6.14  |
+ | DROP (3-shot)        | 8.4   |