icefog72 committed on
Commit
e8cb50b
1 Parent(s): 0ed93a6

Update README.md

Files changed (1)
  1. README.md +51 -48
README.md CHANGED
@@ -1,48 +1,51 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # IceSakeV4RP-7b
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * G:\FModels\IceSakeV3RP-7b
- * G:\FModels\IceSakeV2RP-7b
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
-   - sources:
-       - model: G:\FModels\IceSakeV3RP-7b
-         layer_range: [0, 32]
-       - model: G:\FModels\IceSakeV2RP-7b
-         layer_range: [0, 32]
-
- merge_method: slerp
- base_model: G:\FModels\IceSakeV2RP-7b
- parameters:
-   t:
-     - filter: self_attn
-       value: [0, 0.5, 0.3, 0.7, 1]
-     - filter: mlp
-       value: [1, 0.5, 0.7, 0.3, 0]
-     - value: 0.5 # fallback for rest of tensors
- dtype: bfloat16
-
-
- ```
+ ---
+ license: cc-by-nc-4.0
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ - alpaca
+ - mistral
+ - not-for-all-audiences
+ - nsfw
+ ---
+ # IceSakeV4RP-7b
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * G:\FModels\IceSakeV3RP-7b
+ * G:\FModels\IceSakeV2RP-7b
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+   - sources:
+       - model: G:\FModels\IceSakeV3RP-7b
+         layer_range: [0, 32]
+       - model: G:\FModels\IceSakeV2RP-7b
+         layer_range: [0, 32]
+
+ merge_method: slerp
+ base_model: G:\FModels\IceSakeV2RP-7b
+ parameters:
+   t:
+     - filter: self_attn
+       value: [0, 0.5, 0.3, 0.7, 1]
+     - filter: mlp
+       value: [1, 0.5, 0.7, 0.3, 0]
+     - value: 0.5 # fallback for rest of tensors
+ dtype: bfloat16
+
+
+ ```
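For context, the SLERP method in the config above interpolates each pair of weight tensors along the arc between them rather than along a straight line, and the `t` lists give different interpolation strengths to `self_attn` and `mlp` tensors across the layer stack (0 keeps the base model's tensor, 1 takes the other model's). A minimal pure-Python sketch of the per-tensor operation, with vectors as flat lists — illustrative only, not mergekit's actual implementation:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    t=0 returns v0 (the base model's tensor), t=1 returns v1.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    # Cosine of the angle between the two weight directions
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1 + eps)
    dot = max(-1.0, min(1.0, dot))
    omega = math.acos(dot)
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    so = math.sin(omega)
    c0 = math.sin((1 - t) * omega) / so
    c1 = math.sin(t * omega) / so
    return [c0 * a + c1 * b for a, b in zip(v0, v1)]

# Halfway between two orthogonal unit vectors stays on the unit circle:
mid = slerp(0.5, [1.0, 0.0], [0.0, 1.0])  # ≈ [0.7071, 0.7071]
```

Unlike plain averaging, this preserves the magnitude relationship between the two tensors; the `value: 0.5 # fallback for rest of tensors` line in the config applies `t = 0.5` to every tensor not matched by the `self_attn` or `mlp` filters.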