# SAXON-0-I
This is an iteration of the SAXON-0 model. I am attempting to minimise the amount of fine-tuning I need to do by using mergekit to let SAXON-0 absorb models trained on politics and, specifically, UK law, since I want the model to be good at those domains.
## Merge Details

### Merge Method
This model was merged using the TIES merge method, with chaphi/SAXON-0 as the base.
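For context, TIES merging works in three steps: trim each fine-tuned model's task vector (its delta from the base) to the highest-magnitude fraction given by `density`, elect a sign per parameter by majority, then average only the values that agree with the elected sign. Below is a minimal sketch of that idea on a single weight tensor, assuming PyTorch; it is illustrative only, not mergekit's actual implementation. The `density` and `weights` arguments play the same role as the fields in the configuration below.

```python
import torch

def ties_merge(base: torch.Tensor,
               finetuned: list[torch.Tensor],
               density: float = 0.5,
               weights: list[float] | None = None) -> torch.Tensor:
    """Illustrative TIES merge of one tensor across several fine-tunes."""
    weights = weights or [1.0] * len(finetuned)
    trimmed = []
    for ft, w in zip(finetuned, weights):
        tau = ft - base  # task vector: what this fine-tune changed
        # Trim: keep only the top-`density` fraction of entries by magnitude.
        k = max(1, int(density * tau.numel()))
        threshold = tau.abs().flatten().kthvalue(tau.numel() - k + 1).values
        tau = torch.where(tau.abs() >= threshold, tau, torch.zeros_like(tau))
        trimmed.append(w * tau)
    stacked = torch.stack(trimmed)
    # Elect a sign per parameter by summing the trimmed task vectors.
    elected = torch.sign(stacked.sum(dim=0))
    # Disjoint merge: average only the values whose sign matches the election.
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```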
### Models Merged
The following models were included in the merge:

* EryriLabs/DeepSeek-R1-Distill-Llama-UK-Legislation-8B
* brianmatzelle/llama3.1-8b-instruct-political-subreddits
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: chaphi/SAXON-0
    # no parameters necessary for base model
  - model: EryriLabs/DeepSeek-R1-Distill-Llama-UK-Legislation-8B
    parameters:
      density: 0.5
      weight: 0.5
  - model: brianmatzelle/llama3.1-8b-instruct-political-subreddits
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: chaphi/SAXON-0
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
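If the configuration is saved as, say, `config.yml`, the merge can be reproduced with mergekit's `mergekit-yaml` command: `mergekit-yaml config.yml ./SAXON-0-I` (the output path is illustrative). The result is a standard Llama-family checkpoint, so it should load with `transformers` in the usual way; a usage sketch follows, with a prompt of my own choosing in the model's target domain:

```python
# Loading the merged model with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chaphi/SAXON-0-I")
model = AutoModelForCausalLM.from_pretrained("chaphi/SAXON-0-I", torch_dtype="auto")

# Illustrative prompt touching UK law, one of the merge's target domains.
prompt = "Summarise the purpose of the UK Human Rights Act 1998."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```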