grimjim committed
Commit c64e364
1 Parent(s): 0714ee0

Update README.md

Files changed (1)
  1. README.md +54 -55
README.md CHANGED
@@ -1,55 +1,54 @@
- ---
- language:
- - en
- base_model:
- - meta-llama/Meta-Llama-3-8B-Instruct
- - ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
- library_name: transformers
- tags:
- - meta
- - llama-3
- - pytorch
- - mergekit
- - merge
- license: llama3
- license_link: LICENSE
- pipeline_tag: text-generation
- ---
- # llama-3-merge-pp-instruct-8B
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- Lightly tested at temperature=1.0, minP=0.02 with provisional Llama 3 Instruct prompts.
-
- Built with Meta Llama 3.
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- * [ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
- - sources:
-   - model: ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
-     layer_range: [0,32]
-   - model: meta-llama/Meta-Llama-3-8B-Instruct
-     layer_range: [0,32]
- merge_method: slerp
- base_model: ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
- parameters:
-   t:
-     - value: 0.5
- dtype: bfloat16
-
- ```
 
+ ---
+ language:
+ - en
+ base_model:
+ - meta-llama/Meta-Llama-3-8B-Instruct
+ - ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
+ library_name: transformers
+ tags:
+ - meta
+ - llama-3
+ - pytorch
+ - mergekit
+ - merge
+ license: cc-by-nc-4.0
+ pipeline_tag: text-generation
+ ---
+ # llama-3-merge-pp-instruct-8B
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ Lightly tested at temperature=1.0, minP=0.02 with provisional Llama 3 Instruct prompts.
+
+ Built with Meta Llama 3.
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
+ * [ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+ - sources:
+   - model: ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
+     layer_range: [0,32]
+   - model: meta-llama/Meta-Llama-3-8B-Instruct
+     layer_range: [0,32]
+ merge_method: slerp
+ base_model: ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
+ parameters:
+   t:
+     - value: 0.5
+ dtype: bfloat16
+
+ ```
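
For orientation, a minimal sketch of loading and prompting the merged model with the transformers library, at the sampling settings mentioned in the README (temperature=1.0, min-p=0.02). The repository id is an assumption inferred from the model name and committer, not stated in the commit, and `min_p` requires a recent transformers release.

```python
# Minimal usage sketch, not part of the commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "grimjim/llama-3-merge-pp-instruct-8B"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

# Llama 3 Instruct prompt formatting via the tokenizer's chat template.
messages = [{"role": "user", "content": "Write a two-sentence story about a porpoise."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings from the README's light testing.
output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    min_p=0.02,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```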