Undi95 committed on
Commit 35471b2 · 1 Parent(s): 12a386f

Create README.md

---
license: cc-by-nc-4.0
---

Zephyr was replaced by Airoboros 2.2 and OpenOrca by SynthIA in the mix, to see whether merging Mistral models that all use the same prompt format is a better approach.

## Description

This repo contains fp16 files of Mistral-11B-SynthIAirOmniMix.

## Models used
- [SynthIA-7B-v1.5](https://huggingface.co/migtissera/SynthIA-7B-v1.5)
- [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b)

## Prompt template

3 out of 4 models use the same prompting format.

The best one should be this one, since Zephyr and OpenOrca are out of the merge:

```
(SYSTEM: {context})
USER: {prompt}
ASSISTANT:
```

But this one may work too:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```
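The first template above can be assembled with a small helper. The function name and default below are illustrative, not part of the model card; the optional SYSTEM line is wrapped in parentheses in the template to mark it as optional, so here it is simply omitted when no context is given:

```python
def build_prompt(prompt: str, context: str = "") -> str:
    """Assemble the SynthIA-style prompt used by most models in this merge.

    The SYSTEM line is only emitted when a context is supplied, matching
    the optional "(SYSTEM: {context})" line in the template above.
    """
    lines = []
    if context:
        lines.append(f"SYSTEM: {context}")
    lines.append(f"USER: {prompt}")
    lines.append("ASSISTANT:")
    return "\n".join(lines)


print(build_prompt("What is a frankenmerge?", context="You are a helpful assistant."))
```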

## The secret sauce

Mistral-11B-SynthIAOpenPlatypus:
```
slices:
  - sources:
    - model: "/content/drive/MyDrive/SynthIA-7B-v1.5-bf16"
      layer_range: [0, 24]
  - sources:
    - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-CC-Airo:
```
slices:
  - sources:
    - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
      layer_range: [0, 24]
  - sources:
    - model: "/content/drive/MyDrive/Mistral-7B-Airoboros-2.2-bf16"
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
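Both passthrough configs above stack the first 24 layers of one 7B donor on top of layers 8–31 of another, so layers 8–23 appear twice and the result has 48 layers (hence the ~11B parameter count). A quick, purely illustrative sketch of that arithmetic:

```python
# Each Mistral-7B donor has 32 transformer layers; the passthrough merge
# concatenates two half-open layer ranges into one deeper model.
ranges = [(0, 24), (8, 32)]

total_layers = sum(end - start for start, end in ranges)
duplicated = max(0, min(end for _, end in ranges) - max(start for start, _ in ranges))

print(total_layers)  # layers in the merged model: 48
print(duplicated)    # layers present in both slices (8..23): 16
```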

Mistral-11B-SynthIAirOmniMix:
```
slices:
  - sources:
    - model: Mistral-11B-SynthIAOpenPlatypus
      layer_range: [0, 48]
    - model: Mistral-11B-CC-Airo
      layer_range: [0, 48]
merge_method: slerp
base_model: Mistral-11B-OpenOrcaPlatypus
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
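The final merge interpolates each tensor between the two 48-layer stacks with SLERP, where `t` controls how far toward the second model each tensor class is pulled. A minimal sketch of spherical linear interpolation between two weight vectors follows; it is illustrative only, not mergekit's actual implementation:

```python
import math

def slerp(t, a, b):
    """Spherical linear interpolation: t=0 returns a, t=1 returns b.

    Intermediate t follows the great-circle arc between the directions
    of a and b, falling back to linear interpolation when they are
    nearly parallel. Illustrative sketch, not mergekit's implementation.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    cos_theta = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # nearly parallel vectors: plain lerp is stable
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    w_a = math.sin((1 - t) * theta) / s
    w_b = math.sin(t * theta) / s
    return [w_a * x + w_b * y for x, y in zip(a, b)]

print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # -> [1.0, 0.0]
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # midpoint on the arc
```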
I used [mergekit](https://github.com/cg123/mergekit) for all of the manipulations described here.

## Some scoring I did myself

[Work in progress]

## Others

Special thanks to Sushi and [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and [Charles Goddard](https://github.com/cg123) for his amazing tool.

If you want to support me, you can [here](https://ko-fi.com/undiai).