An experimental intermediate merge
This merge is intended as an intermediate merge for further merges. It's usable as-is and doesn't show any glaring signs of broken behavior. I've included a Q8_0 GGUF in the repo if anyone is curious to try it.
The intuition behind this merge is as follows:
We keep the top 65 percent of weight deltas from the WPO-HB fine-tune at a (very near) 1.0 weight. We then 'flood fill' the remaining 35 percent of model weights with the SimPO deltas. Because normalize is set to true, SimPO's deltas effectively receive a 1.0 weight on any non-overlapping weights. In cases of overlap with the top 65 percent of deltas from WPO-HB, SimPO's relative contribution is near zero; in cases of non-overlap, each model's delta is applied at a 1.0 or very near 1.0 weight.
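To make the "keep the top 65 percent of deltas" part concrete, here's a toy top-k version of density-based delta pruning. This is a simplification, not mergekit's della implementation (della drops deltas stochastically based on magnitude and rescales the survivors), so treat it only as the rough shape of the idea:

```python
# Toy, simplified picture of density-based delta pruning: keep the largest
# 65% of per-parameter deltas (fine-tune minus base) and zero out the rest.
# NOT mergekit's actual della code, which samples drops by magnitude and
# rescales; this just illustrates the intuition in the paragraph above.
import torch

def prune_delta(base: torch.Tensor, tuned: torch.Tensor, density: float = 0.65):
    delta = tuned - base
    k = int(delta.numel() * density)
    if k == 0:
        return torch.zeros_like(delta)
    # threshold = k-th largest absolute delta
    threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
    mask = delta.abs() >= threshold
    return delta * mask  # base + this pruned delta is the model's contribution

base = torch.randn(4, 4)
tuned = base + 0.1 * torch.randn(4, 4)
print(prune_delta(base, tuned))
```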
Per the mergekit docs:
- weight: relative (or absolute if normalize=False) weighting of a given tensor
- normalize: if true, the weights of all models contributing to a tensor will be normalized. Default behavior.
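As a toy illustration of that reading of normalize (my own sketch of the arithmetic, assuming normalization simply divides each model's weight by the sum of weights contributing to a tensor; this is not mergekit's actual code):

```python
# Toy illustration of the normalize=true intuition above. Assumes
# normalization divides each contributing model's weight by the sum of
# all weights contributing to that tensor (an assumption on my part).

def normalized(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Overlap case: WPO-HB's kept delta dominates, SimPO contributes ~nothing.
print(normalized({"WPO-HB": 1.0, "SimPO": 0.00001}))
# {'WPO-HB': ~0.99999, 'SimPO': ~0.00001}

# Non-overlap case: WPO-HB's delta was dropped, so SimPO's tiny weight
# normalizes up to 1.0 and its delta is applied at full strength.
print(normalized({"SimPO": 0.00001}))
# {'SimPO': 1.0}
```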
🤔 Assuming I've understood correctly, anyway. If anyone knows of a way to get model stats (e.g., something like a diff that just spits out statistics about the weight differences between two models), feel free to leave a comment in the discussion. I'd love to be able to verify exactly what happened to the weights, and whether it matches my expectations 🤷
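For what it's worth, one rough way to eyeball weight deltas is to load two checkpoints with transformers and compare tensors directly. The sketch below does that (the model pairing is just an example, and loading two 9B models this way is memory-hungry, so it's a starting point rather than a polished tool):

```python
# Rough sketch of a "weight diff" between a base model and a fine-tune.
# Model IDs are an example pairing; this loads both models into CPU memory,
# which is expensive for 9B-parameter checkpoints.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it", torch_dtype=torch.float16
)
tuned = AutoModelForCausalLM.from_pretrained(
    "wzhouad/gemma-2-9b-it-WPO-HB", torch_dtype=torch.float16
)

base_sd = base.state_dict()
tuned_sd = tuned.state_dict()

for name, b in base_sd.items():
    t = tuned_sd.get(name)
    if t is None or t.shape != b.shape:
        continue  # skip tensors that don't line up between the two models
    delta = t.float() - b.float()
    frac_changed = (delta.abs() > 1e-6).float().mean().item()
    print(
        f"{name}: mean|delta|={delta.abs().mean().item():.2e} "
        f"max|delta|={delta.abs().max().item():.2e} "
        f"changed={frac_changed:.1%}"
    )
```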
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the della merge method, with google/gemma-2-9b-it as the base.
Models Merged
The following models were included in the merge:
- wzhouad/gemma-2-9b-it-WPO-HB
- princeton-nlp/gemma-2-9b-it-SimPO
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: google/gemma-2-9b-it
  - model: wzhouad/gemma-2-9b-it-WPO-HB
    parameters:
      density: 0.65
      weight: 1.00
  - model: princeton-nlp/gemma-2-9b-it-SimPO
    parameters:
      density: 1.0
      weight: 0.00001
merge_method: della
base_model: google/gemma-2-9b-it
parameters:
  normalize: true
  int8_mask: true
  lambda: 1.0
  epsilon: 0.1
dtype: float16
```
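If anyone wants to reproduce or tweak this, the config above should drop straight into mergekit's CLI, along the lines of `mergekit-yaml config.yaml ./output-dir` (the output path is just an example).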