This is one of my first merges, just throwing random things at the wall to see what sticks. It seems really good in my limited testing, but I can't make any promises about its quality. Just try it yourself and give me feedback, I guess; I'm rather interested to see what others think of it. I don't know what I'm doing, I don't even know the benefits of different merge methods :P
This is a merge of pre-trained language models created using mergekit.
This model was merged using the task arithmetic merge method using NeverSleep/Noromaid-7B-0.4-DPO as a base.
The following models were included in the merge:

- maywell/Synatra-7B-v0.3-RP
- tavtav/eros-7b-test
- cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
The following YAML configuration was used to produce this model:
```yaml
base_model: NeverSleep/Noromaid-7B-0.4-DPO
models:
  - model: maywell/Synatra-7B-v0.3-RP
    parameters:
      weight: 0.2
  - model: tavtav/eros-7b-test
    parameters:
      weight: 0.2
  - model: cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5
    parameters:
      weight: 0.2
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    parameters:
      weight: 0.2
merge_method: task_arithmetic
parameters:
  weight: 0.17
dtype: float16
random_seed: 694201337567099116663322537
```
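For anyone else who (like me) isn't sure what the merge methods actually do: the core idea of task arithmetic is to compute a "task vector" for each fine-tuned model (its weights minus the base model's weights), scale each vector by its `weight`, and add the sum back onto the base. Here's a toy sketch of that idea on plain Python lists — this is my own illustration of the concept, not mergekit's actual implementation:

```python
def task_arithmetic_merge(base, finetuned, weights):
    """Merge models by adding weighted task vectors onto the base.

    Each model is a dict mapping parameter names to flat lists of floats.
    A task vector is (finetuned - base), computed element-wise.
    """
    merged = {}
    for name, base_vals in base.items():
        # One scaled task vector per fine-tuned model.
        deltas = [
            [w * (ft_val - b) for ft_val, b in zip(ft[name], base_vals)]
            for ft, w in zip(finetuned, weights)
        ]
        # Base weights plus the sum of all scaled task vectors.
        merged[name] = [
            b + sum(d[i] for d in deltas)
            for i, b in enumerate(base_vals)
        ]
    return merged

# Toy example: a base "layer" of zeros and two fine-tunes.
base = {"layer.weight": [0.0, 0.0]}
ft_a = {"layer.weight": [1.0, 1.0]}  # task vector is +1 everywhere
ft_b = {"layer.weight": [3.0, 3.0]}  # task vector is +3 everywhere

merged = task_arithmetic_merge(base, [ft_a, ft_b], weights=[0.2, 0.2])
print(merged["layer.weight"])  # 0.2*1 + 0.2*3 ≈ 0.8 per element
```

If I understand mergekit's CLI correctly, a config like the one above is applied with something along the lines of `mergekit-yaml config.yml ./merged-model` — check the mergekit README for the exact invocation and flags.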