merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
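
As a rough illustration (not mergekit's actual implementation), SLERP blends each pair of corresponding weight tensors along the arc between them rather than averaging them linearly; the interpolation factor t = 0.25 from the configuration below keeps the result closer to the base model. A minimal NumPy sketch with hypothetical toy tensors:

import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_unit = a / (np.linalg.norm(a) + eps)                         # direction of the base tensor
    b_unit = b / (np.linalg.norm(b) + eps)                         # direction of the other tensor
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))  # angle between the two directions
    if np.sin(omega) < eps:                                        # nearly parallel: fall back to a linear blend
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Hypothetical stand-ins for one layer's weights from each parent model.
base_weights = np.random.randn(4096).astype(np.float32)    # e.g. MT3-Gen4-MAMM (the base model)
other_weights = np.random.randn(4096).astype(np.float32)   # e.g. MT3-Gen4-GBMUI
merged_weights = slerp(0.25, base_weights, other_weights)  # t = 0.25 stays closer to the base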

Models Merged

The following models were included in the merge:

zelk12/MT3-Gen4-MAMM-gemma-2-9B (base model)
zelk12/MT3-Gen4-GBMUI-gemma-2-9B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: zelk12/MT3-Gen4-MAMM-gemma-2-9B
  - model: zelk12/MT3-Gen4-GBMUI-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT3-Gen4-MAMM-gemma-2-9B
dtype: bfloat16
parameters:
  t: 0.25
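
Not part of the original card, but the merged checkpoint should load like any other Gemma-2 model through the standard transformers API; the device_map="auto" option assumes the accelerate package is installed:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zelk12/MT3-Gen4-gemma-2-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the dtype used for the merge
    device_map="auto",            # requires the accelerate package
)

prompt = "Explain spherical linear interpolation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))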

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                Value
Avg.                  34.49
IFEval (0-shot)       77.37
BBH (3-shot)          43.78
MATH Lvl 5 (4-shot)   20.47
GPQA (0-shot)         12.98
MuSR (0-shot)         14.72
MMLU-PRO (5-shot)     37.64