leaderboard-pr-bot committed on
Commit 382d07c
1 Parent(s): ac42187

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -5,4 +5,17 @@ tags:
  - text-generation-inference
  ---
 
- **NOTE: Get the new version here: https://huggingface.co/eachadea/vicuna-13b-1.1**
+ **NOTE: Get the new version here: https://huggingface.co/eachadea/vicuna-13b-1.1**
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eachadea__vicuna-13b)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 45.7  |
+ | ARC (25-shot)        | 51.71 |
+ | HellaSwag (10-shot)  | 79.94 |
+ | MMLU (5-shot)        | 50.84 |
+ | TruthfulQA (0-shot)  | 52.68 |
+ | Winogrande (5-shot)  | 71.03 |
+ | GSM8K (5-shot)       | 7.58  |
+ | DROP (3-shot)        | 6.1   |