Adding Evaluation Results #1
opened by leaderboard-pr-bot

README.md CHANGED
@@ -92,4 +92,17 @@ language:
 
 ```shell
 python infer.py --model_path tigerbot-70b-chat(-v1)
-```
+```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 62.1  |
+| ARC (25-shot)       | 62.46 |
+| HellaSwag (10-shot) | 83.61 |
+| MMLU (5-shot)       | 65.49 |
+| TruthfulQA (0-shot) | 52.76 |
+| Winogrande (5-shot) | 80.19 |
+| GSM8K (5-shot)      | 37.76 |
+| DROP (3-shot)       | 52.45 |
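
The Avg. row appears to be the unweighted mean of the seven per-benchmark scores; a quick sketch (values copied from the table above) to check the arithmetic:

```python
# Sanity check: "Avg." as the unweighted mean of the seven
# benchmark scores reported in the table above.
scores = {
    "ARC (25-shot)": 62.46,
    "HellaSwag (10-shot)": 83.61,
    "MMLU (5-shot)": 65.49,
    "TruthfulQA (0-shot)": 52.76,
    "Winogrande (5-shot)": 80.19,
    "GSM8K (5-shot)": 37.76,
    "DROP (3-shot)": 52.45,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.1f}")  # prints 62.1, matching the table
```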
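The per-task details behind these numbers live in the dataset linked above. A minimal sketch for pulling them locally with `huggingface_hub` (the repo id comes from the link in the PR body; the snapshot's internal file layout is not specified in this PR, so the last step simply lists what was downloaded):

```python
# Sketch: download the detailed-results dataset referenced above.
# Requires: pip install huggingface_hub
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base",
    repo_type="dataset",
)

# The file layout isn't documented here, so list the snapshot contents.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```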