---
base_model:
- Azazelle/MN-Halide-12b-v1.0
- benhaotang/nemo-math-science-philosophy-12B
- FallenMerick/MN-Chunky-Lotus-12B
- FallenMerick/MN-Violet-Lotus-12B
- GalrionSoftworks/Canidori-12B-v1
- GalrionSoftworks/Pleiades-12B-v1
- inflatebot/MN-12B-Mag-Mell-R1
- Nohobby/MN-12B-Siskin-v0.2
- ThijsL202/MadMix-Unleashed-12B
- Trappu/Abomination-merge-attempt-12B
- VongolaChouko/Starcannon-Unleashed-12B-v1.0
library_name: transformers
tags:
- mergekit
- merge
- bfloat16
- safetensors
- 12b
- chat
- creative
- roleplay
- conversational
- creative-writing
- not-for-all-audiences
language:
- en
- ru
---
# AbominationScience-12B-v4
>*When the choice is not random*

![AbominationScienceLogo256.png](https://cdn-uploads.huggingface.co/production/uploads/673125091920e70ac26c8a2e/mrBCmxkidQ9KNQsRO_fOy.png)

This is an interesting merge of **11 cool models**, created using [mergekit](https://github.com/arcee-ai/mergekit).
Enjoy exploring :)

## Merge Details
### Method

This model was built through a multi-step process: intermediate merges were created and then remerged, trying several model variations along the way to get the best result.
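For readers unfamiliar with how the slerp steps below work: each pair of weight tensors is spherically interpolated, and the short `t` list in the config is stretched across the layer stack so the blend ratio varies with depth. The following is a minimal illustrative sketch (not mergekit's actual code; the 40-layer count and all names here are hypothetical):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two weight vectors."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return np.sin((1 - t) * omega) / so * a + np.sin(t * omega) / so * b

# A short t schedule like [0.8, 0.2, ...] is interpolated across the
# layer stack, so the blend ratio oscillates with depth.
t_schedule = [0.8, 0.2, 0.8, 0.2, 0.8, 0.2, 0.8]
num_layers = 40  # hypothetical layer count for illustration
layer_ts = np.interp(np.linspace(0, 1, num_layers),
                     np.linspace(0, 1, len(t_schedule)),
                     t_schedule)
```

With this schedule, layers near the ends lean toward one parent (`t = 0.8`) while layers at the troughs lean toward the other (`t = 0.2`).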
### Models

The following models were included in the merge:

* [Azazelle/MN-Halide-12b-v1.0](https://huggingface.co/Azazelle/MN-Halide-12b-v1.0)
* [benhaotang/nemo-math-science-philosophy-12B](https://huggingface.co/benhaotang/nemo-math-science-philosophy-12B)
* [FallenMerick/MN-Chunky-Lotus-12B](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B)
* [FallenMerick/MN-Violet-Lotus-12B](https://huggingface.co/FallenMerick/MN-Violet-Lotus-12B)
* [GalrionSoftworks/Canidori-12B-v1](https://huggingface.co/GalrionSoftworks/Canidori-12B-v1)
* [GalrionSoftworks/Pleiades-12B-v1](https://huggingface.co/GalrionSoftworks/Pleiades-12B-v1)
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [Nohobby/MN-12B-Siskin-v0.2](https://huggingface.co/Nohobby/MN-12B-Siskin-v0.2)
* [ThijsL202/MadMix-Unleashed-12B](https://huggingface.co/ThijsL202/MadMix-Unleashed-12B)
* [Trappu/Abomination-merge-attempt-12B](https://huggingface.co/Trappu/Abomination-merge-attempt-12B)
* [VongolaChouko/Starcannon-Unleashed-12B-v1.0](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0)
### Configuration

The following YAML configurations were used to produce this model:
```yaml
# AbominationScience
# It's a good model; I used it as the base for this merge.
models:
  - model: Trappu/Abomination-merge-attempt-12B
  - model: benhaotang/nemo-math-science-philosophy-12B
merge_method: slerp
base_model: Trappu/Abomination-merge-attempt-12B
dtype: bfloat16
parameters:
  t: [0.8, 0.2, 0.8, 0.2, 0.8, 0.2, 0.8]

# SCUMCL
models:
  - model: VongolaChouko/Starcannon-Unleashed-12B-v1.0
  - model: FallenMerick/MN-Chunky-Lotus-12B
merge_method: slerp
base_model: VongolaChouko/Starcannon-Unleashed-12B-v1.0
dtype: bfloat16
parameters:
  t: [0.7, 0.3, 0.7, 0.3, 0.7, 0.3, 0.7]

# SISMMU
models:
  - model: Nohobby/MN-12B-Siskin-v0.2
  - model: ThijsL202/MadMix-Unleashed-12B
merge_method: slerp
base_model: Nohobby/MN-12B-Siskin-v0.2
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]

# PLECAD
models:
  - model: GalrionSoftworks/Pleiades-12B-v1
  - model: GalrionSoftworks/Canidori-12B-v1
merge_method: slerp
base_model: GalrionSoftworks/Pleiades-12B-v1
dtype: bfloat16
parameters:
  t: [0.7, 0.3, 0.7, 0.3, 0.7, 0.3, 0.7]

# Positive-12B-v1 and Negative-12B-v1 are the basis of diversity for the base model.
# I've lost the exact config, but it was most likely a slerp like the one in SCUMCL/SISMMU/PLECAD.
# Positive-12B-v1 = SCUMCL + SISMMU.
# Negative-12B-v1 = PLECAD + AbominationScience.

# AbominationScience-12B-v2
models:
  - model: F:/Positive-12B-v1
    parameters:
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
  - model: F:/Negative-12B-v1
    parameters:
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
merge_method: dare_ties
base_model: F:/AbominationScience
dtype: bfloat16

# AbominationScience-12B-v3
# A della merge with a good base to form an interesting core.
models:
  - model: F:/AbominationScience
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/AbominationScience-12B-v2
dtype: bfloat16

# AbominationScience-12B-v4
# A final shift of the model toward three very good bases.
models:
  - model: inflatebot/MN-12B-Mag-Mell-R1
  - model: FallenMerick/MN-Violet-Lotus-12B
  - model: Azazelle/MN-Halide-12b-v1.0
merge_method: model_stock
base_model: F:/AbominationScience-12B-v3
dtype: bfloat16
```
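To make the `dare_ties` step above more concrete: DARE randomly drops a large fraction of each task vector (the delta between a fine-tuned model and the base) and rescales the survivors so the expected delta is unchanged, before the TIES sign-election combines the sparse deltas. Here is a minimal sketch of just the drop-and-rescale part, under assumed toy dimensions (mergekit's real implementation works tensor-by-tensor and adds the TIES merging logic):

```python
import numpy as np

def dare(delta: np.ndarray, drop_rate: float,
         rng: np.random.Generator) -> np.ndarray:
    """Drop-And-REscale: zero each entry with probability drop_rate,
    then rescale survivors by 1/(1 - drop_rate) to preserve the
    expected value of the delta."""
    keep_mask = rng.random(delta.shape) >= drop_rate
    return delta * keep_mask / (1.0 - drop_rate)

rng = np.random.default_rng(0)
base = np.zeros(10_000)                 # stand-in for base-model weights
delta = rng.normal(size=10_000)         # stand-in for (fine-tuned - base)
sparse = dare(delta, drop_rate=0.9, rng=rng)
merged = base + sparse                  # ~90% of the delta is dropped
```

Even at a 0.9 drop rate, the rescaling keeps the merged weights an unbiased estimate of the dense merge, which is why the schedule in the config can push `density` values as low as 0.1.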

>My thanks to the authors of the original models; your work is incredible. Have a good time 🖤