---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- code
---
# Magic-Dolphin-7b
<img src="https://huggingface.co/InferenceIllusionist/Magic-Dolphin-7b/resolve/main/magic-dolphin.jfif" width="500"/>
A linear merge of dolphin-2.6-mistral-7b-dpo-laser, merlinite-7b, and Hyperion-1.5-Mistral-7B. These three models showed excellent acumen in technical topics, so I wanted to see how they would behave together in a merge. Several different ratios were tested before this release; in the end, a higher weighting for merlinite-7b helped smooth out some rough edges. This model is also a test of how LAB tuning is impacted by merging with models that leverage DPO.

This was my first experiment with merging models, so any feedback is greatly appreciated.

Uses the Alpaca prompt template.
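
For reference, the standard Alpaca instruction format looks roughly like the sketch below. This is an illustration only; the exact preamble wording can vary between fine-tunes.

```python
# Minimal sketch of the standard Alpaca prompt format (instruction-only variant).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)
```
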
<b>Sample Question</b>
<img src="https://huggingface.co/InferenceIllusionist/Magic-Dolphin-7b/resolve/main/magic-dolphin.JPG" width="750"/>
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
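
Conceptually, a linear merge is just a weighted average of the source models' parameters, key by key. Below is a minimal Python sketch of that idea (illustration only, with placeholder state dicts; mergekit additionally handles weight normalization options, tokenizer handling, and sharded checkpoints):

```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of matching tensors across several model state dicts."""
    total = sum(weights)  # normalize so the effective weights sum to 1
    merged = {}
    for key in state_dicts[0]:
        combined = sum(w * sd[key].float() for sd, w in zip(state_dicts, weights))
        merged[key] = (combined / total).half()  # cast back to match `dtype: float16`
    return merged

# e.g. merged = linear_merge([sd_dolphin, sd_hyperion, sd_merlinite], [1.0, 0.3, 0.5])
```
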
### Models Merged
The following models were included in the merge:
* models/Hyperion-1.5-Mistral-7B
* models/dolphin-2.6-mistral-7b-dpo-laser
* models/merlinite-7b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: models/dolphin-2.6-mistral-7b-dpo-laser
    parameters:
      weight: 1.0
  - model: models/Hyperion-1.5-Mistral-7B
    parameters:
      weight: 0.3
  - model: models/merlinite-7b
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
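
For completeness, here is a minimal sketch of loading the model with transformers and prompting it in the Alpaca format. The repo id comes from this card; the example instruction and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "InferenceIllusionist/Magic-Dolphin-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style prompt, matching the template noted above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a linear model merge does.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
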