This model was converted to GGUF format from [`Nohobby/MS-Schisandra-22B-v0.3`](https://huggingface.co/Nohobby/MS-Schisandra-22B-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/Nohobby/MS-Schisandra-22B-v0.3) for more details on the model.
## Merge Details

### Merging steps

#### Karasik-v0.3

```yaml
models:
  - model: Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: [0.2, 0.3, 0.2, 0.3, 0.2]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
  - model: Mistral-Small-NovusKyver
    parameters:
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
  - model: MiS-Firefly-v0.2-22B
    parameters:
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
      density: [0.7]
  - model: magnum-v4-22b
    parameters:
      weight: [0.33]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
merge_method: della_linear
base_model: Mistral-Small-22B-ArliAI-RPMax-v1.1
parameters:
  epsilon: 0.05
  lambda: 1.05
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
```
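In mergekit configs, a list value such as `weight: [0.2, 0.3, 0.2, 0.3, 0.2]` is a gradient: the anchor points are interpolated linearly across the model's layers, so each layer gets its own coefficient. A minimal sketch of that expansion (pure Python for illustration, not mergekit's actual code):

```python
def interpolate_gradient(anchors: list[float], num_layers: int) -> list[float]:
    """Linearly interpolate anchor values across num_layers layers,
    mirroring how mergekit expands list-valued parameters."""
    if num_layers == 1:
        return [anchors[0]]
    out = []
    for layer in range(num_layers):
        # Map this layer's position in [0, 1] onto the anchor list.
        pos = layer / (num_layers - 1) * (len(anchors) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        out.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return out
```

For a model with more layers than anchor points, intermediate layers receive blended values between the two nearest anchors.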
#### SchisandraVA3

(Config taken from here)

```yaml
merge_method: della_linear
dtype: bfloat16
parameters:
  normalize: true
  int8_mask: true
tokenizer_source: base
base_model: Cydonia-22B-v1.3
models:
  - model: Karasik03
    parameters:
      density: 0.55
      weight: 1
  - model: Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      density: 0.55
      weight: 1
  - model: ChatWaifu_v2.0_22B
    parameters:
      density: 0.55
      weight: 1
  - model: MS-Meadowlark-Alt-22B
    parameters:
      density: 0.55
      weight: 1
  - model: SorcererLM-22B
    parameters:
      density: 0.55
      weight: 1
```
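With `normalize: true` and equal weights, each surviving parameter ends up as a weighted average of the contributing models. A simplified sketch of that final combination step (it ignores DELLA's magnitude-based dropout and rescaling, which happen before this averaging):

```python
def linear_merge(tensors: list[list[float]],
                 weights: list[float],
                 normalize: bool = True) -> list[float]:
    """Weighted elementwise combination of same-shaped parameter vectors.
    With normalize=True the result is divided by the sum of weights,
    so equal weights yield a plain average."""
    total = sum(weights) if normalize else 1.0
    merged = []
    for values in zip(*tensors):
        merged.append(sum(w * v for w, v in zip(weights, values)) / total)
    return merged
```

With the five equal-weight models above, each merged value is simply the mean of the five source values at that position.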
#### Schisandra-v0.3

```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: della_linear
parameters:
  density: 0.5
base_model: SchisandraVA3
models:
  - model: unsloth/Mistral-Small-Instruct-2409
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - value: 0
  - model: SchisandraVA3
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - value: 1
```
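The `filter` entries above give different per-layer weights to different projection matrices: a tensor takes the first rule whose filter string matches its name, falling back to the bare `value` entry. A rough sketch of that dispatch (illustrative only, not mergekit's implementation):

```python
def weight_for(tensor_name: str, layer: int, rules: list[dict]) -> float:
    """Pick a merge weight for a tensor: the first rule whose 'filter'
    substring appears in the tensor name wins; a rule with no 'filter'
    acts as the default. List values are indexed by layer."""
    for rule in rules:
        flt = rule.get("filter")
        if flt is None or flt in tensor_name:
            value = rule["value"]
            return value[layer] if isinstance(value, list) else value
    return 0.0

# Hypothetical shortened rule set in the style of the config above.
rules = [
    {"filter": "v_proj", "value": [0, 0, 1]},
    {"filter": "down_proj", "value": [0, 0, 0]},
    {"value": 0},
]
```

Note how the two models' filter values are complementary (e.g. `up_proj` is all ones from one model and all zeros from the other), so each tensor type is drawn predominantly from one parent.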
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
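The standard GGUF-my-repo instructions continue from here; a sketch of the usual steps is below. The repo name and `.gguf` filename are placeholders, not values from this document — substitute the actual quant file listed in the repo.

```shell
# Install llama.cpp via its Homebrew formula (macOS and Linux).
brew install llama.cpp

# Run a quant directly from a Hugging Face repo with the CLI.
# <user>/<repo> and <quant-file> are placeholders.
llama-cli --hf-repo <user>/MS-Schisandra-22B-v0.3-GGUF \
  --hf-file <quant-file>.gguf \
  -p "Write a short greeting."
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` flags if you prefer an OpenAI-compatible HTTP endpoint over the CLI.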