grimjim committed
Commit
7775c90
1 Parent(s): bb2b1ba

Update README.md

Files changed (1): README.md (+49, −50)
README.md CHANGED

````diff
@@ -1,50 +1,49 @@
 ---
 base_model:
 - Nitral-AI/Poppy_Porpoise-0.72-L3-8B
 - Sao10K/L3-8B-Stheno-v3.2
 library_name: transformers
 tags:
 - mergekit
 - merge
 pipeline_tag: text-generation
-license: llama3
-license_link: LICENSE
+license: cc-by-nc-4.0
 ---
 # llama-3-sthenic-porpoise-v1-8B
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 This model is a straightforward SLERP merge of two popular models.
 
 Built with Meta Llama 3.
 
 ## Merge Details
 ### Merge Method
 
 This model was merged using the SLERP merge method.
 
 ### Models Merged
 
 The following models were included in the merge:
 * [Nitral-AI/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-0.72-L3-8B)
 * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 ```yaml
 slices:
 - sources:
   - model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
     layer_range: [0,32]
   - model: Sao10K/L3-8B-Stheno-v3.2
     layer_range: [0,32]
 merge_method: slerp
 base_model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
 parameters:
   t:
   - value: 0.5
 dtype: bfloat16
 
 ```
````
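
For readers unfamiliar with the merge method in the README above: SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, here with interpolation factor `t = 0.5`. A minimal NumPy sketch of the per-tensor operation follows; the function name, epsilon, and colinearity threshold are illustrative choices, not mergekit's actual implementation.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    Interpolates along the arc between v0 and v1; falls back to plain
    linear interpolation when the vectors are nearly colinear, where the
    spherical formula becomes numerically unstable.
    """
    v0_unit = v0 / (np.linalg.norm(v0) + eps)
    v1_unit = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0)
    if abs(dot) > 0.9995:
        # Nearly colinear vectors: lerp is a safe approximation here.
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)          # angle between the two weight vectors
    sin_theta = np.sin(theta)
    return (np.sin((1 - t) * theta) / sin_theta) * v0 \
         + (np.sin(t * theta) / sin_theta) * v1
```

In practice, mergekit applies an operation like this to every weight tensor across layers 0–32 of both models, driven by the YAML config above via its `mergekit-yaml` command-line entry point.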