T145 committed on
Commit
05afbe3
1 Parent(s): 3887320

Update README.md

Files changed (1)
  1. README.md +166 -166
README.md CHANGED
@@ -112 +112 @@
-# Untitled Model (1)
+# ZEUS 8B 🌩️ V17

---
base_model:
- unsloth/Llama-3.1-Storm-8B
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- unsloth/Meta-Llama-3.1-8B-Instruct
- VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
- arcee-ai/Llama-3.1-SuperNova-Lite
library_name: transformers
tags:
- mergekit
- merge
model-index:
- name: ZEUS-8B-V17
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 79.41
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V17
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 32.34
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V17
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 21.15
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V17
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.62
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V17
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.64
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V17
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 32.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V17
      name: Open LLM Leaderboard
---
# ZEUS 8B 🌩️ V17

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as the base.
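In short: DARE randomly prunes each fine-tuned model's delta from the base and rescales the surviving entries, and TIES then resolves sign conflicts between the pruned deltas before they are added back to the base. The sketch below illustrates the per-tensor idea in plain PyTorch; it is a simplification for intuition only, not mergekit's actual implementation (which also handles weight normalization, int8 masking, and tokenizer alignment).

```python
# A minimal per-tensor sketch of DARE-TIES, for intuition only.
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    # DARE: randomly drop (1 - density) of the delta's entries,
    # then rescale the survivors so the expected delta is unchanged.
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return delta * mask / density

def dare_ties(base, deltas, densities, weights):
    # Sparsify each task vector (fine-tune minus base) with DARE.
    sparse = torch.stack([w * dare(d, p)
                          for d, p, w in zip(deltas, densities, weights)])
    # TIES sign election: per parameter, keep only the deltas whose sign
    # agrees with the weighted-majority sign, then sum them into the base.
    elected = torch.sign(sparse.sum(dim=0))
    agree = (torch.sign(sparse) == elected).to(sparse.dtype)
    return base + (sparse * agree).sum(dim=0)
```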
### Models Merged

The following models were included in the merge:

* [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B)
* [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
* [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct)
* [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)

### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
  random_seed: 145.0
slices:
- sources:
  - layer_range: [0, 32]
    model: unsloth/Llama-3.1-Storm-8B
    parameters:
      density: 0.95
      weight: 0.28
  - layer_range: [0, 32]
    model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.9
      weight: 0.27
  - layer_range: [0, 32]
    model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.92
      weight: 0.25
  - layer_range: [0, 32]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.92
      weight: 0.2
  - layer_range: [0, 32]
    model: unsloth/Meta-Llama-3.1-8B-Instruct
tokenizer_source: union
```
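To reproduce the merge, this config can be passed to the `mergekit-yaml` CLI or driven from Python. Below is a minimal sketch using mergekit's Python entry points, assuming a current `pip install mergekit`; the file and output-directory names are placeholders.

```python
# Sketch of running the merge via mergekit's Python API (assumed current API).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "zeus-v17.yaml" is a placeholder holding the YAML config shown above.
with open("zeus-v17.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./ZEUS-8B-V17",                # output directory (placeholder)
    options=MergeOptions(
        cuda=True,                  # merge on GPU if available
        copy_tokenizer=True,        # write a tokenizer into the output
    ),
)
```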

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/T145__ZEUS-8B-V17-details)!
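The merged model loads like any Llama-3.1-8B-Instruct checkpoint. A minimal usage sketch with 🤗 Transformers, assuming the repo id `T145/ZEUS-8B-V17` (the model this card describes):

```python
# Minimal chat usage sketch with the Transformers text-generation pipeline.
import torch
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="T145/ZEUS-8B-V17",
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the DARE-TIES merge method."}]
out = generate(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])  # assistant reply
```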