leaderboard-pr-bot committed on
Commit c7af953
1 Parent(s): 85c27af

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
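The per-task numbers behind the summary table come from a details dataset on the Hub, linked in the diff below. A minimal sketch of how those results might be inspected with the `datasets` library follows; the dataset id is taken from the link added by this PR, while the configuration and split names are discovered at runtime rather than assumed:

# Minimal sketch: browse the leaderboard's detailed results for this model.
# Assumes the `datasets` library is installed and the details repo is publicly readable.
from datasets import get_dataset_config_names, load_dataset

DETAILS_REPO = "open-llm-leaderboard/details_roneneldan__TinyStories-33M"

# Each benchmark run is stored under its own configuration name.
configs = get_dataset_config_names(DETAILS_REPO)
print(configs)

# Load the first configuration and peek at one record.
details = load_dataset(DETAILS_REPO, configs[0])
split_name = list(details.keys())[0]
print(details[split_name][0])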

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -38,4 +38,17 @@ output = model.generate(input_ids, max_length = 1000, num_beams=1)
 output_text = tokenizer.decode(output[0], skip_special_tokens=True)
 
 # Print the generated text
-print(output_text)
+print(output_text)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_roneneldan__TinyStories-33M)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 24.38 |
+| ARC (25-shot)         | 24.23 |
+| HellaSwag (10-shot)   | 25.69 |
+| MMLU (5-shot)         | 23.82 |
+| TruthfulQA (0-shot)   | 47.64 |
+| Winogrande (5-shot)   | 49.09 |
+| GSM8K (5-shot)        | 0.0   |
+| DROP (3-shot)         | 0.19  |
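For reference, the hunk above only touches the tail of the README's usage snippet. A minimal, self-contained version of that snippet is sketched below, assuming the standard transformers auto classes; the prompt string is illustrative, and the tokenizer fallback is an assumption in case the model repository does not ship its own tokenizer:

# Minimal sketch of the usage example whose last lines appear in the diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "roneneldan/TinyStories-33M"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Assumption: the repo ships its own tokenizer; otherwise fall back to a
# compatible GPT-Neo tokenizer (the fallback checkpoint is an assumption,
# not something stated in this PR).
try:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
except OSError:
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

prompt = "Once upon a time there was"  # illustrative prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Same generate call as in the hunk header context line.
output = model.generate(input_ids, max_length=1000, num_beams=1)

output_text = tokenizer.decode(output[0], skip_special_tokens=True)

# Print the generated text
print(output_text)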