Khetterman committed ede79a9 (parent: eb15f2b): Create README.md
---
base_model:
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- bunnycore/Llama-3.1-8B-OmniMatrix
- bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1
- Casual-Autopsy/L3-Super-Nova-RP-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- d0rj/Llama-3-8B-saiga-suzume-ties
- DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
- DreadPoor/CoolerCoder-8B-Model_Stock
- DreadPoor/L3.1-BaeZel-8B-Della
- DreadPoor/Trinas_Nectar-8B-model_stock
- IlyaGusev/saiga_llama3_8b
- invisietch/EtherealRainbow-v0.3-8B
- invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
- jeiku/Chaos_RP_l3_8B
- mlabonne/Daredevil-8B
- MrRobotoAI/Loki-.Epic_Fiction.-8b
- PJMixers/LLaMa-3-CursedStock-v2.0-8B
- ResplendentAI/Nymph_8B
- rityak/L3.1-DarkStock-8B
- saishf/Neural-SOVLish-Devil-8B-L3
- saishf/SOVL-Mega-Mash-V2-L3-8B
- sethuiyer/Dr.Samantha-8B
- SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
- v000000/L3-8B-BlueSerpentine
- v000000/L3.1-Storniitova-8B
- win10/ArliAI-RPMax-v1.3-merge-8B
- ZeroXClem/Llama-3-8B-ProLong-SAO-Roleplay-512k
library_name: transformers
tags:
- mergekit
- merge
- bfloat16
- safetensors
- 8b
- chat
- creative
- roleplay
- conversational
- not-for-all-audiences
language:
- en
- ru
---
# CursedMatrix-8B-v9
>*The long journey from despair to acceptable perfection*

![CursedMatrixLogo256.png](https://cdn-uploads.huggingface.co/production/uploads/673125091920e70ac26c8a2e/8TFyICKPCNowo3jf3Q7y2.png)

This is an interesting merge of **27 cool models**, created using [mergekit](https://github.com/arcee-ai/mergekit).
Enjoy exploring :)

## Merge Details
### Method

This model was merged in a multistep process, remerging several model variations along the way to get the best result.

### Models

The following models were included in the merge:

* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [bunnycore/Llama-3.1-8B-OmniMatrix](https://huggingface.co/bunnycore/Llama-3.1-8B-OmniMatrix)
* [bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1](https://huggingface.co/bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1)
* [Casual-Autopsy/L3-Super-Nova-RP-8B](https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B)
* [d0rj/Llama-3-8B-saiga-suzume-ties](https://huggingface.co/d0rj/Llama-3-8B-saiga-suzume-ties)
* [DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B](https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B)
* [DreadPoor/CoolerCoder-8B-Model_Stock](https://huggingface.co/DreadPoor/CoolerCoder-8B-Model_Stock)
* [DreadPoor/L3.1-BaeZel-8B-Della](https://huggingface.co/DreadPoor/L3.1-BaeZel-8B-Della)
* [DreadPoor/Trinas_Nectar-8B-model_stock](https://huggingface.co/DreadPoor/Trinas_Nectar-8B-model_stock)
* [IlyaGusev/saiga_llama3_8b](https://huggingface.co/IlyaGusev/saiga_llama3_8b)
* [invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B)
* [invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B](https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B)
* [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
* [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B)
* [MrRobotoAI/Loki-.Epic_Fiction.-8b](https://huggingface.co/MrRobotoAI/Loki-.Epic_Fiction.-8b)
* [PJMixers/LLaMa-3-CursedStock-v2.0-8B](https://huggingface.co/PJMixers/LLaMa-3-CursedStock-v2.0-8B)
* [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)
* [rityak/L3.1-DarkStock-8B](https://huggingface.co/rityak/L3.1-DarkStock-8B)
* [saishf/Neural-SOVLish-Devil-8B-L3](https://huggingface.co/saishf/Neural-SOVLish-Devil-8B-L3)
* [saishf/SOVL-Mega-Mash-V2-L3-8B](https://huggingface.co/saishf/SOVL-Mega-Mash-V2-L3-8B)
* [sethuiyer/Dr.Samantha-8B](https://huggingface.co/sethuiyer/Dr.Samantha-8B)
* [SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA)
* [v000000/L3-8B-BlueSerpentine](https://huggingface.co/v000000/L3-8B-BlueSerpentine)
* [v000000/L3.1-Storniitova-8B](https://huggingface.co/v000000/L3.1-Storniitova-8B)
* [win10/ArliAI-RPMax-v1.3-merge-8B](https://huggingface.co/win10/ArliAI-RPMax-v1.3-merge-8B)
* [ZeroXClem/Llama-3-8B-ProLong-SAO-Roleplay-512k](https://huggingface.co/ZeroXClem/Llama-3-8B-ProLong-SAO-Roleplay-512k)

### Configuration

The following YAML configurations were used to produce this model:

```yaml
### ::: Generation 1 merges :

# CursedMatrix-8B-v1
models:
  - model: bunnycore/Llama-3.1-8B-OmniMatrix
    parameters:
      density: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
      weight: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
  - model: PJMixers/LLaMa-3-CursedStock-v2.0-8B
    parameters:
      density: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
      weight: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
merge_method: ties
base_model: saishf/SOVL-Mega-Mash-V2-L3-8B
dtype: bfloat16

# TitanPlanet-8B-v1
models:
  - model: bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1
    parameters:
      density: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
      weight: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
  - model: MrRobotoAI/Loki-.Epic_Fiction.-8b
    parameters:
      density: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
      weight: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
merge_method: ties
base_model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
dtype: bfloat16

# NeuralCoder-8B-v1
models:
  - model: sethuiyer/Dr.Samantha-8B
    parameters:
      density: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
      weight: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
  - model: DreadPoor/CoolerCoder-8B-Model_Stock
    parameters:
      density: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
      weight: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
merge_method: ties
base_model: saishf/Neural-SOVLish-Devil-8B-L3
dtype: bfloat16

# EtherealNymph-8B-v1
models:
  - model: invisietch/EtherealRainbow-v0.3-8B
  - model: ResplendentAI/Nymph_8B
merge_method: slerp
base_model: invisietch/EtherealRainbow-v0.3-8B
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]

# UmbralDevil-8B-v1
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
  - model: mlabonne/Daredevil-8B
merge_method: slerp
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]

# EvilMind-8B-v1
models:
  - model: mlabonne/Daredevil-8B
    parameters:
      weight: [1.0, 0.3, 0.1, 0.0]
      density: [0.7, 0.2]
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
    parameters:
      weight: [0.1, 0.9, 0.1]
      density: 0.5
  - model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      weight: [0.0, 0.1, 0.3, 1.0]
      density: [0.2, 0.7]
merge_method: della_linear
parameters:
  epsilon: 0.15
  lambda: 1
base_model: ResplendentAI/Nymph_8B
dtype: bfloat16

### ::: Generation 2 merges :

# DevilMind-8B-v1
models:
  - model: F:/EvilMind-8B-v1
  - model: F:/UmbralDevil-8B-v1
merge_method: slerp
base_model: F:/EvilMind-8B-v1
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]

# TitanNymph-8B-v1
models:
  - model: F:/TitanPlanet-8B-v1
  - model: F:/EtherealNymph-8B-v1
merge_method: slerp
base_model: F:/TitanPlanet-8B-v1
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]

# CursedMatrix-8B-v2
models:
  - model: F:/TitanPlanet-8B-v1
    parameters:
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
  - model: F:/NeuralCoder-8B-v1
    parameters:
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
merge_method: dare_ties
base_model: F:/CursedMatrix-8B-v1
dtype: bfloat16

# CursedMatrix-8B-v3
models:
  - model: F:/TitanPlanet-8B-v1
  - model: F:/CursedMatrix-8B-v1
  - model: F:/EtherealNymph-8B-v1
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v2
dtype: bfloat16

### ::: Generation 3 merges :

# CursedMatrix-8B-v4
models:
  - model: F:/CursedMatrix-8B-v3
    parameters:
      weight: 0.8
  - model: F:/TitanNymph-8B-v1
    parameters:
      weight: 0.4
  - model: DreadPoor/Trinas_Nectar-8B-model_stock
    parameters:
      weight: 0.3
  - model: F:/DevilMind-8B-v1
    parameters:
      weight: 0.2
merge_method: task_arithmetic
base_model: F:/CursedMatrix-8B-v3
dtype: bfloat16

# CursedMatrix-8B-v4-rev2
models:
  - model: F:/CursedMatrix-8B-v3
    parameters:
      weight: 0.8
  - model: F:/TitanNymph-8B-v1
    parameters:
      weight: 0.4
  - model: DreadPoor/Trinas_Nectar-8B-model_stock
    parameters:
      weight: 0.3
  - model: F:/DevilMind-8B-v1
    parameters:
      weight: 0.2
merge_method: task_arithmetic
base_model: F:/CursedMatrix-8B-v1
dtype: bfloat16

# CursedMatrix-8B-v5
models:
  - model: F:/CursedMatrix-8B-v4-rev2
merge_method: slerp
base_model: F:/CursedMatrix-8B-v4
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]

### ::: Generation 4 merges :

# CursedMatrix-8B-v6
models:
  - model: jeiku/Chaos_RP_l3_8B
  - model: ZeroXClem/Llama-3-8B-ProLong-SAO-Roleplay-512k
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v5
dtype: bfloat16

# CursedMatrix-8B-v6-rev2
models:
  - model: win10/ArliAI-RPMax-v1.3-merge-8B
  - model: v000000/L3.1-Storniitova-8B
  - model: d0rj/Llama-3-8B-saiga-suzume-ties
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v5
dtype: bfloat16

# CursedMatrix-8B-v6-rev3
models:
  - model: Casual-Autopsy/L3-Super-Nova-RP-8B
  - model: IlyaGusev/saiga_llama3_8b
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v5
dtype: bfloat16

# CursedMatrix-8B-v7
models:
  - model: F:/CursedMatrix-8B-v6
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.05, 0.25]
  - model: F:/CursedMatrix-8B-v6-rev3
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.25, 0.05]
merge_method: ties
base_model: F:/CursedMatrix-8B-v6-rev2
dtype: bfloat16

# CursedMatrix-8B-v8
models:
  - model: F:/CursedMatrix-8B-v6
  - model: F:/CursedMatrix-8B-v6-rev2
  - model: F:/CursedMatrix-8B-v6-rev3
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v7
dtype: bfloat16

### ::: Generation 5 merges :

# Cursed-DarkRainbow-8B-v1
models:
  - model: invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
  - model: rityak/L3.1-DarkStock-8B
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16

# Cursed-BlueBaezel-8B-v1
models:
  - model: v000000/L3-8B-BlueSerpentine
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
  - model: DreadPoor/L3.1-BaeZel-8B-Della
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16

# Cursed-SuzumeMaid-8B-v1
models:
  - model: d0rj/Llama-3-8B-saiga-suzume-ties
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16

# Cursed-UnalignedSaiga-8B-v1
models:
  - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
  - model: IlyaGusev/saiga_llama3_8b
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16

# CursedMatrix-8B-v9
# Final model...
models:
  - model: F:/Cursed-UnalignedSaiga-8B-v1
  - model: F:/Cursed-DarkRainbow-8B-v1
  - model: F:/Cursed-BlueBaezel-8B-v1
  - model: F:/Cursed-SuzumeMaid-8B-v1
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16
```
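
The 25-step density/weight lists in the Generation 1 `ties` merges are simple "bouncing" ramps: the value walks between 0.1 and 0.9 in 0.1 steps, reversing direction at each bound. As a minimal sketch, they can be generated instead of typed by hand (`zigzag` is a hypothetical helper for this card, not part of mergekit):

```python
def zigzag(start, length, lo=0.1, hi=0.9, step=0.1):
    """Walk from `start` in `step` increments, bouncing between `lo` and `hi`.

    zigzag(0.9, 25) gives the density ramp 0.9 -> 0.1 -> 0.9 -> 0.1,
    zigzag(0.1, 25) gives the mirrored weight ramp 0.1 -> 0.9 -> 0.1 -> 0.9.
    """
    direction = -1 if start == hi else 1  # descend first when starting at the top
    vals = [start]
    cur = start
    for _ in range(length - 1):
        nxt = round(cur + direction * step, 2)
        if nxt > hi or nxt < lo:          # bounce off the bound
            direction = -direction
            nxt = round(cur + direction * step, 2)
        vals.append(nxt)
        cur = nxt
    return vals

density = zigzag(0.9, 25)  # matches the density list on bunnycore/Llama-3.1-8B-OmniMatrix
weight = zigzag(0.1, 25)   # matches the paired weight list
```

Per-layer lists like these let a gradient merge emphasize one parent in some layer ranges and the other parent elsewhere; the hand-tuned `t` curves for the slerp steps follow a different, fan-shaped pattern and are kept as literals above.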

>My thanks to the authors of the original models; your work is incredible. Have a good time 🖤