Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -37,4 +37,17 @@ Details on MorningStar's training data are unavailable. It was likely trained on
  ## Ethical Considerations
  - Large language models like MorningStar carry risks around bias, toxicity, and misinformation.
  - Model outputs should be monitored and filtered before use in real applications.
- - Avoid harmful or unethical prompts.
+ - Avoid harmful or unethical prompts.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NewstaR__Morningstar-13b-hf)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 50.48 |
+ | ARC (25-shot)        | 59.04 |
+ | HellaSwag (10-shot)  | 81.93 |
+ | MMLU (5-shot)        | 54.63 |
+ | TruthfulQA (0-shot)  | 44.12 |
+ | Winogrande (5-shot)  | 74.51 |
+ | GSM8K (5-shot)       | 15.24 |
+ | DROP (3-shot)        | 23.87 |
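The "Avg." row in the table above appears to be the unweighted mean of the seven benchmark scores, rounded to two decimals; a quick sketch confirming that (metric names are copied from the table, not from any leaderboard API):

```python
# Per-benchmark scores from the evaluation table above.
scores = {
    "ARC (25-shot)": 59.04,
    "HellaSwag (10-shot)": 81.93,
    "MMLU (5-shot)": 54.63,
    "TruthfulQA (0-shot)": 44.12,
    "Winogrande (5-shot)": 74.51,
    "GSM8K (5-shot)": 15.24,
    "DROP (3-shot)": 23.87,
}

# Unweighted mean, rounded to two decimals like the leaderboard display.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 50.48
```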