Experimental and negative results
Models that didn't always quite work out, but may still be of interest.
This is a merge of pre-trained language models created using mergekit.
In theory, the context length has been extended to 32K tokens; in practice, output quality degrades above 8K context length.
Tested with ChatML instruct prompts, temperature 1.0, and minP 0.01, but feel free to experiment.
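As a concrete starting point, here is a minimal sketch of those settings using the transformers library. It assumes a recent transformers release with min_p sampling support, a placeholder repo id, and that the merged model ships a ChatML chat template.

```python
# Minimal sketch of the tested settings; the repo id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/this-merge"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# ChatML-style instruct prompt, applied via the tokenizer's chat template
# (assumes the model's chat template is ChatML).
messages = [{"role": "user", "content": "Explain task arithmetic in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# temperature 1.0 and minP 0.01, per the settings above
output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=1.0, min_p=0.01
)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

minP 0.01 discards tokens whose probability falls below 1% of the most likely token's, which helps keep sampling coherent even at temperature 1.0.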
This model was merged using the task arithmetic merge method, with grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B as the base.
The following models were included in the merge:
* lucyknada/microsoft_WizardLM-2-7B
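For context, task arithmetic (Ilharco et al., "Editing Models with Task Arithmetic") builds the merge from task vectors: each source model's delta from the base is scaled by its weight and added back onto the base. Below is a simplified sketch of that idea over plain state dicts; it is not mergekit's actual implementation, which also handles layer slicing, dtypes, and tokenizers.

```python
# Simplified sketch of the task arithmetic idea, not mergekit's implementation.
import torch

def task_arithmetic_merge(base: dict, sources: list[tuple[dict, float]]) -> dict:
    """merged = base + sum_i weight_i * (source_i - base), per tensor."""
    merged = {}
    for name, base_param in base.items():
        delta = torch.zeros_like(base_param)
        for source_sd, weight in sources:
            # Task vector: how this source differs from the shared base.
            delta += weight * (source_sd[name] - base_param)
        merged[name] = base_param + delta
    return merged
```

Note that with a single source at weight 1.00, as in the configuration below, this simplified formula reduces to the source model's weights over the merged layer range.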
The following YAML configuration was used to produce this model:
```yaml
base_model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
  - layer_range: [0, 32]
    model: lucyknada/microsoft_WizardLM-2-7B
    parameters:
      weight: 1.00
```
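For completeness, a sketch of applying a configuration like this programmatically, modeled on the Python usage pattern in mergekit's README; exact signatures may vary across mergekit versions, and the config path and output directory are placeholders. The same config can also be run with mergekit's `mergekit-yaml` command-line entry point.

```python
# Sketch based on mergekit's documented Python API; details may differ by version.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (placeholder path).
with open("merge-config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```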