# inexpertus_1.2-8B-LINEAR

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged using the Linear merge method.
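As a rough sketch (not mergekit's exact implementation), a linear merge takes a weighted combination of the source models' parameters:

$$\theta_{\text{merged}} = \sum_{i} w_i\, \theta_i$$

where $\theta_i$ are the parameters of source model $i$ and $w_i$ its configured weight. With `normalize: true` the sum is divided by $\sum_i w_i$; the configuration below leaves the weighted sum unnormalized.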
### Models Merged
The following models were included in the merge:
- DreadPoor/Rusted_Gold-8B-LINEAR
- DreadPoor/H_the_eighth-8B-LINEAR
- DreadPoor/Spring_Dusk-8B-SCE
- DreadPoor/Aurora_faustus-8B-LINEAR
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: DreadPoor/H_the_eighth-8B-LINEAR
    parameters:
      weight: 1.0
  - model: DreadPoor/Spring_Dusk-8B-SCE
    parameters:
      weight: 1.0
  - model: DreadPoor/Rusted_Gold-8B-LINEAR
    parameters:
      weight: 1.0
  - model: DreadPoor/Aurora_faustus-8B-LINEAR
    parameters:
      weight: 1.0
merge_method: linear
normalize: false
int8_mask: true
dtype: bfloat16
```
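For illustration only, the sketch below shows what an unnormalized linear merge of the four source checkpoints' state dicts would look like with plain `transformers`/`torch`. The released weights were produced by mergekit itself, not this script, and the output path is hypothetical; running it would also require enough RAM to hold the accumulated parameters in float32.

```python
# Sketch of an unnormalized linear merge (weights 1.0, normalize: false),
# mirroring the YAML config above. Illustration only.
import torch
from transformers import AutoModelForCausalLM

sources = {
    "DreadPoor/H_the_eighth-8B-LINEAR": 1.0,
    "DreadPoor/Spring_Dusk-8B-SCE": 1.0,
    "DreadPoor/Rusted_Gold-8B-LINEAR": 1.0,
    "DreadPoor/Aurora_faustus-8B-LINEAR": 1.0,
}

merged = None
for name, weight in sources.items():
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
    state = model.state_dict()
    if merged is None:
        merged = {k: weight * v.clone().float() for k, v in state.items()}
    else:
        for k, v in state.items():
            merged[k] += weight * v.float()
    del model, state

# With normalize: true, the accumulated sum would instead be divided by the
# total weight:
# total = sum(sources.values())
# merged = {k: v / total for k, v in merged.items()}

# Load one source as a skeleton and write the merged tensors into it.
base = AutoModelForCausalLM.from_pretrained(
    "DreadPoor/H_the_eighth-8B-LINEAR", torch_dtype=torch.bfloat16
)
base.load_state_dict({k: v.to(torch.bfloat16) for k, v in merged.items()})
base.save_pretrained("./inexpertus_1.2-8B-LINEAR-sketch")  # hypothetical path
```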
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric              | Value (%) |
|---------------------|-----------|
| Average             | 28.79     |
| IFEval (0-Shot)     | 73.48     |
| BBH (3-Shot)        | 36.06     |
| MATH Lvl 5 (4-Shot) | 15.86     |
| GPQA (0-Shot)       | 6.04      |
| MuSR (0-Shot)       | 10.30     |
| MMLU-PRO (5-Shot)   | 30.98     |