ProdeusUnity committed
Commit adc7dab
1 Parent(s): 2c20849

Upload README.md

Files changed (1): README.md (+63 −46)
@@ -1,46 +1,63 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # MN-Prismatic-12b
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using /mistralai_Mistral-Nemo-Base-2407 as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * /inflatebot_MN-12B-Mag-Mell-R1
- * /nbeerbower_Mistral-Nemo-Prism-12B-v5
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: /inflatebot_MN-12B-Mag-Mell-R1
-     parameters:
-       weight: 0.3
-       density: 0.5
-   - model: /nbeerbower_Mistral-Nemo-Prism-12B-v5
-     parameters:
-       weight: 0.4
-       density: 0.75
- base_model: /mistralai_Mistral-Nemo-Base-2407
- parameters:
-   epsilon: 0.05
-   normalize: true
-   lambda: 1
- merge_method: ties
- dtype: bfloat16
- ```
+ ---
+ base_model: []
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # Stellar Odyssey 12b v0.0
+
+ *The sparkling courage I longed for, what I got is small... My tears are surely the prism of tomorrow... Say "Hello!" to the ideal future, let's go see them~*
+
+ Listen to the song on YouTube: https://www.youtube.com/watch?v=v3I6EVlyPx4
+
+ A one-off merge for a friend, though it came out rather well. I like it, so give it a try.
+
+ * mistralai/Mistral-Nemo-Base-2407
+ * inflatebot/MN-12b-Mag-Mell-R1
+ * nbeerbower/Mistral-Nemo-Prism-12B-v5
+
+ License: Apache 2.0
+
+ Format: Mistral Tekken or ChatML
+
+ Thank you to AuriAetherwiing for helping me merge the models and for providing compute (A40).
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the della_linear merge method, using a local copy of mistralai/Mistral-Nemo-Base-2407 as a base.
37
+ ### Models Merged
38
+
39
+ Models Merged
40
+ The following models were included in the merge:
41
+
42
+ /inflatebot_MN-12B-Mag-Mell-R1
43
+ /nbeerbower_Mistral-Nemo-Prism-12B-v5
44
+
45
+ #### Configuration
46
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: /inflatebot_MN-12B-Mag-Mell-R1
+     parameters:
+       weight: 0.3
+       density: 0.5
+   - model: /nbeerbower_Mistral-Nemo-Prism-12B-v5
+     parameters:
+       weight: 0.4
+       density: 0.75
+ base_model: /mistralai_Mistral-Nemo-Base-2407
+ parameters:
+   epsilon: 0.05
+   normalize: true
+   lambda: 1
+ merge_method: ties
+ dtype: bfloat16
+ ```
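The card above lists ChatML as one accepted prompt format. As a reference, a generic ChatML prompt can be assembled as in the sketch below; this is the standard ChatML layout with `<|im_start|>`/`<|im_end|>` markers, and the exact special tokens should be verified against the model's tokenizer configuration:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a generic ChatML prompt string.

    Standard ChatML layout (assumed here; confirm the markers against
    the model's tokenizer_config.json before relying on it).
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


# Example: build a prompt for a single-turn exchange.
print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```

In practice, `tokenizer.apply_chat_template` in transformers handles this automatically when the tokenizer ships a chat template.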
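The configuration's `merge_method: ties` refers to [TIES merging](https://arxiv.org/abs/2306.01708). As a rough, toy illustration of what that method does (this is a sketch, not mergekit's actual implementation), here is the trim / elect-sign / disjoint-merge procedure on flat parameter vectors; `weight` and `density` play the same roles as in the YAML above:

```python
import numpy as np


def ties_merge(base, finetuned, weights, densities, lam=1.0):
    """Toy TIES merge on 1-D parameter vectors: trim each task vector to its
    top-`density` fraction by magnitude, elect a per-parameter sign, then
    average only the deltas that agree with the elected sign."""
    deltas = []
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base
        # Trim: keep the top-k entries by magnitude, zero the rest.
        k = max(int(round(d * delta.size)), 1)
        thresh = np.sort(np.abs(delta))[-k]
        trimmed = np.where(np.abs(delta) >= thresh, delta, 0.0)
        deltas.append(w * trimmed)
    stacked = np.stack(deltas)
    # Elect sign: majority (by summed magnitude) sign per parameter.
    elected = np.sign(stacked.sum(axis=0))
    # Disjoint merge: average only nonzero deltas that agree with the sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(axis=0) / np.maximum(agree.sum(axis=0), 1)
    return base + lam * merged


# Coordinates where the two task vectors agree survive; conflicts cancel.
base = np.zeros(4)
ft_a = np.array([1.0, -1.0, 0.5, 0.0])
ft_b = np.array([1.0, 1.0, -0.5, 0.0])
print(ties_merge(base, [ft_a, ft_b], [1.0, 1.0], [1.0, 1.0]))  # -> [1. 0. 0. 0.]
```

The `epsilon` and `lambda` parameters in the config are extra knobs exposed by mergekit (`lambda` corresponds to the final scaling factor `lam` above); `normalize: true` rescales the summed weights.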