# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as the base.
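Task arithmetic builds the merge from *task vectors*: each listed model's weight delta relative to the base is scaled, summed, and added back onto the base. Below is a minimal sketch in plain PyTorch, assuming whole state dicts in memory (mergekit itself streams sharded checkpoints); the function name and arguments are illustrative, not mergekit's actual API.

```python
# Illustrative sketch of task arithmetic merging, not mergekit's actual code.
import torch

def task_arithmetic_merge(base_sd, model_sds, weights, normalize=True):
    """Merge fine-tuned models into a base via task vectors.

    base_sd:   state dict of the base model
    model_sds: state dicts of the models listed in the config
    weights:   per-model weights (1.0 each in the config below)
    """
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name, base_param in base_sd.items():
        # Task vector = fine-tuned weights minus base weights; the merge
        # adds the weighted sum of task vectors back onto the base.
        delta = sum(w * (sd[name] - base_param)
                    for sd, w in zip(model_sds, weights))
        merged[name] = base_param + delta
    return merged

# Tiny demo: the base's own task vector is zero, so listing it as a
# model only affects normalization.
base = {"w": torch.tensor([1.0, 1.0])}
ft = {"w": torch.tensor([3.0, 1.0])}
print(task_arithmetic_merge(base, [base, ft], [1.0, 1.0]))  # {'w': [2.0, 1.0]}
```

Note that the config below lists the base model itself with weight 1.0. Its task vector is zero, so with `normalize: true` (which appears to divide by the weight sum) the net effect is roughly `base + 0.5 * (abliterated - base)`, i.e. a half-strength injection of the abliterated model's delta.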
### Models Merged

The following models were included in the merge:

* failspy/Phi-3-medium-4k-instruct-abliterated-v3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
    parameters:
      weight: 1.0
  - model: failspy/Phi-3-medium-4k-instruct-abliterated-v3
    parameters:
      weight: 1.0
merge_method: task_arithmetic
base_model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
parameters:
  normalize: true
dtype: float16
```
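To reproduce a merge like this, mergekit's `mergekit-yaml` entry point consumes such a config (e.g. `mergekit-yaml config.yaml ./merged`; exact flags depend on your mergekit version). The output is a standard Transformers checkpoint; here is a minimal loading sketch, assuming only this repo's id and the stock `transformers` API.

```python
# Minimal loading sketch; the repo id comes from this card. Some Phi-3
# checkpoints also need trust_remote_code=True on older transformers versions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Ph3task2-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's dtype: float16
    device_map="auto",
)

prompt = "Summarize task arithmetic merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```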
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 28.25 |
| IFEval (0-shot, strict accuracy) | 47.13 |
| BBH (3-shot, normalized accuracy) | 44.08 |
| MATH Lvl 5 (4-shot, exact match) | 12.46 |
| GPQA (0-shot, acc_norm) | 10.74 |
| MuSR (0-shot, acc_norm) | 16.62 |
| MMLU-PRO (5-shot, accuracy) | 38.44 |
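These six metrics are the Open LLM Leaderboard v2 tasks, which the leaderboard runs through EleutherAI's lm-evaluation-harness. The sketch below shows how one score might be reproduced locally; the `simple_evaluate` API and the `leaderboard_ifeval` task name are taken from recent harness releases and may differ in other versions.

```python
# Sketch only: the task name and simple_evaluate signature follow recent
# lm-evaluation-harness releases and may differ across versions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=allknowingroger/Ph3task2-14B,dtype=float16",
    tasks=["leaderboard_ifeval"],  # IFEval is run 0-shot on the leaderboard
    batch_size=1,
)
print(results["results"]["leaderboard_ifeval"])
```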