# Rusted_Platinum-8B-LINEAR

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Linear merge method.
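
Conceptually, a linear merge is a weighted average of the source models' parameter tensors, taken key by key. The sketch below illustrates the idea in plain PyTorch; it is a simplification for intuition, not mergekit's actual implementation, and the function name is illustrative only.

```python
import torch

def linear_merge(state_dicts, weights, normalize=False, out_dtype=torch.bfloat16):
    """Weighted average of matching parameter tensors from several models.

    state_dicts: list of state_dicts with identical keys and shapes.
    weights:     one scalar weight per model (both are 1.0 in this card's config).
    normalize:   if True, divide by the sum of weights (this card sets normalize: false).
    """
    total = sum(weights)
    merged = {}
    for key in state_dicts[0]:
        acc = torch.zeros_like(state_dicts[0][key], dtype=torch.float32)
        for sd, w in zip(state_dicts, weights):
            acc += w * sd[key].to(torch.float32)
        if normalize and total != 0:
            acc /= total
        merged[key] = acc.to(out_dtype)  # cast to bfloat16, as in the config below
    return merged
```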

### Models Merged

The following models were included in the merge:

* Sao10K/L3-8B-Stheno-v3.2 (with the kik41/lora-sarcasm-more-llama-3-8b-v2 LoRA)
* FuseAI/FuseChat-Llama-3.1-8B-SFT (with the kik41/lora-type-expository-llama-3-8b-v2 LoRA)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2+kik41/lora-sarcasm-more-llama-3-8b-v2
    parameters:
      weight: 1.0
  - model: FuseAI/FuseChat-Llama-3.1-8B-SFT+kik41/lora-type-expository-llama-3-8b-v2
    parameters:
      weight: 1.0
merge_method: linear
normalize: false
int8_mask: true
dtype: bfloat16
```
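
To reproduce the merge, save the configuration above to a YAML file and run it through mergekit, either with the `mergekit-yaml` command-line tool or from Python. The snippet below is a sketch that assumes mergekit's `MergeConfiguration`/`MergeOptions`/`run_merge` entry points; the file name and output path are placeholders.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (placeholder file name).
with open("rusted_platinum.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Rusted_Platinum-8B-LINEAR",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy a tokenizer into the output
    ),
)
```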

## Open LLM Leaderboard Evaluation Results

Detailed results are available on the Open LLM Leaderboard; the summarized results are reproduced below.

| Metric              | Value (%) |
|---------------------|----------:|
| Average             |     27.83 |
| IFEval (0-Shot)     |     71.80 |
| BBH (3-Shot)        |     34.79 |
| MATH Lvl 5 (4-Shot) |     17.22 |
| GPQA (0-shot)       |      4.03 |
| MuSR (0-shot)       |      8.82 |
| MMLU-PRO (5-shot)   |     30.33 |
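
The merged model can be loaded like any other Llama-3-family checkpoint with transformers. The example below is illustrative; the prompt and generation settings are not tuned for this merge.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DreadPoor/Rusted_Platinum-8B-LINEAR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what a linear model merge is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```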