TeeZee committed
Commit b7f7b2b
1 Parent(s): e73f0ef

Update README.md

Files changed (1)
  1. README.md +0 -11
README.md CHANGED
@@ -131,16 +131,5 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 - this merge has best evaluation results, so it will be finetuned to 'recover' from the merge
 - finetunig will be done on 5-10% of openorca dataset and full DPO datasets used by SOLAR
 - v03 > v01 > v02 - based on average evaluation scores, removing 1/4 of total layers seems to be the correct way to scale DUS
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__GALAXY-XB-v.03)
 
-| Metric                          |Value|
-|---------------------------------|----:|
-|Avg.                             |63.37|
-|AI2 Reasoning Challenge (25-Shot)|61.77|
-|HellaSwag (10-Shot)              |83.59|
-|MMLU (5-Shot)                    |64.55|
-|TruthfulQA (0-shot)              |44.19|
-|Winogrande (5-shot)              |81.06|
-|GSM8k (5-shot)                   |45.03|
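The retained README notes reference scaling via Depth Up-Scaling (DUS), the SOLAR recipe of duplicating a model's layer stack and trimming overlapping layers from each copy. As a rough illustration of what "removing 1/4 of total layers" means there, here is a minimal sketch; the helper name `dus_layer_indices` is hypothetical and not part of this repo.

```python
def dus_layer_indices(n_layers: int, frac_removed: float = 0.25) -> list[int]:
    """Sketch of SOLAR-style Depth Up-Scaling (DUS).

    Duplicate the layer stack, drop the top `k` layers of the first
    copy and the bottom `k` layers of the second, then concatenate.
    `frac_removed` is the fraction of total layers trimmed from each copy.
    (Hypothetical helper for illustration only.)
    """
    k = int(n_layers * frac_removed)           # layers trimmed from each copy
    first = list(range(0, n_layers - k))       # bottom part of copy 1
    second = list(range(k, n_layers))          # top part of copy 2
    return first + second                      # depth grows to 2*n_layers - 2*k

# A 32-layer base with 1/4 of layers removed from each copy
# yields a 48-layer up-scaled stack, as in SOLAR 10.7B.
print(len(dus_layer_indices(32)))  # → 48
```

The returned list maps each position in the up-scaled stack back to a source layer index, which is why the middle of the new stack repeats the base model's middle layers.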
 
 