InferenceIllusionist committed on
Commit 590f041
1 Parent(s): 4b08ea6

Update README.md

Files changed (1): README.md (+5 -4)
README.md CHANGED
@@ -4,18 +4,19 @@ library_name: transformers
 tags:
 - mergekit
 - merge
-
+- code
 ---
 # Magic-Dolphin-7b
-Magic-Dolphin-7b
-
+<img src="https://huggingface.co/InferenceIllusionist/Magic-Dolphin-7b/resolve/main/magic-dolphin.jfif" width="500"/>
 A linear merge of dolphin-2.6-mistral-7b-dpo-laser, merlinite-7b, and Hyperion-1.5-Mistral-7B. These three models showed excellent acumen in technical topics, so I wanted to see how they would behave together in a merge. Several different ratios were tested before this release; in the end, a higher weighting for merlinite-7b helped smooth out some edges. This model is a test of how LAB tuning is impacted by merges with models leveraging DPO.

 This was my first experiment with merging models, so any feedback is greatly appreciated.

 Uses Alpaca template.

+<p align="center">

+</p>


 ## Merge Details
@@ -48,4 +49,4 @@ models:
 merge_method: linear
 dtype: float16

-```
+```
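
The hunks above show only the tail of the README's mergekit config (`models:`, `merge_method: linear`, `dtype: float16`). For orientation, below is a minimal sketch of a mergekit linear-merge config in the shape this README describes. The org prefixes on the model paths and the weights are assumptions: the diff does not show the released ratios beyond the README's note that merlinite-7b was weighted higher.

```yaml
# Hypothetical mergekit config sketching the linear merge this README describes.
# The repo prefixes and the weights are placeholders, not the released values.
models:
  - model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser  # prefix assumed
    parameters:
      weight: 0.3
  - model: ibm/merlinite-7b  # prefix assumed
    parameters:
      weight: 0.4  # README notes merlinite-7b received a higher weighting
  - model: Locutusque/Hyperion-1.5-Mistral-7B  # prefix assumed
    parameters:
      weight: 0.3
merge_method: linear
dtype: float16
```

A config like this would be run with mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./output-model-directory`.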