# Model Card for Sina-Odin-7b-Merge
This model is part of a series of experimental DARE merges.
The following YAML configuration file was used with mergekit to produce this merge:
```yaml
models:
  - model: Mihaiii/Metis-0.3
    # no parameters necessary for base model
  - model: rishiraj/smol-7b #75
    parameters:
      weight: 0.2
      density: 0.41
  - model: SanjiWatsuki/openchat-3.5-1210-starling-slerp #125
    parameters:
      weight: 0.33
      density: 0.54
  - model: Azazelle/Dumb-Maidlet #200
    parameters:
      weight: 0.53
      density: 0.71
merge_method: dare_ties
base_model: Mihaiii/Metis-0.3
parameters:
  int8_mask: true
dtype: bfloat16
```
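The merge can be reproduced by pointing mergekit's `mergekit-yaml` command at the configuration above. The resulting model loads like any other 7B causal-LM checkpoint with the `transformers` library; the snippet below is a minimal usage sketch, assuming `torch`, `transformers`, and `accelerate` are installed, with a purely illustrative prompt and generation settings.

```python
# Minimal usage sketch (assumes torch, transformers, and accelerate are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/Sina-Odin-7b-Merge"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # requires accelerate
)

# Illustrative prompt; not tuned for this particular merge.
prompt = "Explain what a DARE-TIES model merge is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```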
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 47.82 |
| AI2 Reasoning Challenge (25-Shot) | 52.82 |
| HellaSwag (10-Shot) | 68.86 |
| MMLU (5-Shot) | 45.54 |
| TruthfulQA (0-shot) | 39.20 |
| Winogrande (5-shot) | 72.22 |
| GSM8k (5-shot) | 8.26 |