This model is a SLERP (spherical linear interpolation) merge of cookinai/CatMacaroni-Slerp and mncai/mistral-7b-dpo-v5.
## Evaluation Results

### HuggingFace Leaderboard
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|
| 73.1 | 69.62 | 87.09 | 64.81 | 62.82 | 81.45 | 72.78 |
The model achieved an improvement in TruthfulQA over cookinai/CatMacaroni-Slerp and in GSM8K over mncai/mistral-7b-dpo-v5, which was the goal of the merge, yielding an average score better than either parent. It is unclear why the TruthfulQA metric is still meaningfully lower than that of the base model mncai/mistral-7b-dpo-v5.
## Training Details

The `.yaml` configuration file used with mergekit:
```yaml
slices:
  - sources:
      - model: cookinai/CatMacaroni-Slerp
        layer_range: [0, 32]
      - model: mncai/mistral-7b-dpo-v5
        layer_range: [0, 32]
merge_method: slerp
base_model: mncai/mistral-7b-dpo-v5
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
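For intuition, the merge method named in the config above, SLERP, interpolates each pair of corresponding weight tensors along the arc between them rather than along a straight line, with the `t` values controlling how far toward the second model each layer group moves. Below is a minimal, illustrative sketch of the interpolation formula; it is simplified relative to mergekit's actual implementation (which handles per-layer `t` schedules, dtype handling, and degenerate cases more carefully):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the two tensors' directions.
    """
    # Normalized copies, used only to measure the angle between tensors
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.sum(v0_n * v1_n), -1.0, 1.0)

    # Nearly (anti-)parallel tensors: fall back to linear interpolation
    if abs(dot) > 1.0 - eps:
        return (1.0 - t) * v0 + t * v1

    theta = np.arccos(dot)        # angle between the two tensors
    sin_theta = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / sin_theta) * v0 \
         + (np.sin(t * theta) / sin_theta) * v1
```

In the config, `value: [0, 0.5, 0.3, 0.7, 1]` for `self_attn` means `t` ramps across layer depth, so early attention layers stay close to the base model and late layers lean toward the other parent, with the `mlp` schedule mirrored.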
## Bias, Risks, and Limitations
The model has not been evaluated for safety and is only intended for research and experiments.