---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
base_model:
- bamec66557/mergekit-slerp-uhnpbqg
- bamec66557/mergekit-slerp-tsgkafq
model-index:
- name: MISCHIEVOUS-12B-Mix_III_ex_V
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 43.16
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bamec66557/MISCHIEVOUS-12B-Mix_III_ex_V
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 34.87
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bamec66557/MISCHIEVOUS-12B-Mix_III_ex_V
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 13.14
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bamec66557/MISCHIEVOUS-12B-Mix_III_ex_V
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.4
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bamec66557/MISCHIEVOUS-12B-Mix_III_ex_V
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 12.77
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bamec66557/MISCHIEVOUS-12B-Mix_III_ex_V
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 29.43
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bamec66557/MISCHIEVOUS-12B-Mix_III_ex_V
      name: Open LLM Leaderboard
---
# MISCHIEVOUS-12B-Mix_III_ex_V

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method, with [bamec66557/mergekit-slerp-tsgkafq](https://huggingface.co/bamec66557/mergekit-slerp-tsgkafq) as the base model.
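
SLERP interpolates each pair of weight tensors along the great-circle arc between them (treating each tensor as one high-dimensional vector) rather than along a straight line, and degenerates to ordinary linear interpolation when the two vectors are nearly parallel. A minimal sketch of the idea, not mergekit's actual implementation (function and variable names are illustrative):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors.

    Treats each tensor as a flat vector; t=0 returns v0, t=1 returns v1.
    """
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors.
    cos_theta = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    if theta.abs() < 1e-4:
        # Nearly colinear vectors: slerp degenerates to plain lerp.
        out = (1.0 - t) * a + t * b
    else:
        sin_theta = torch.sin(theta)
        out = (torch.sin((1.0 - t) * theta) * a + torch.sin(t * theta) * b) / sin_theta
    return out.reshape(v0.shape).to(v0.dtype)
```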

### Models Merged

The following models were included in the merge:
* [bamec66557/mergekit-slerp-uhnpbqg](https://huggingface.co/bamec66557/mergekit-slerp-uhnpbqg)
* [bamec66557/mergekit-slerp-tsgkafq](https://huggingface.co/bamec66557/mergekit-slerp-tsgkafq)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# Merging MISCHIEVOUS-12B-Mix Models with Sliced SLERP
base_model: bamec66557/mergekit-slerp-tsgkafq
dtype: bfloat16
merge_method: slerp
tokenizer_source: union

# Slices Configuration (Layer-Specific Merging)
slices:
  - name: initial_layers
    sources:
      - model: bamec66557/mergekit-slerp-uhnpbqg
        layer_range: [0, 10]
      - model: bamec66557/mergekit-slerp-tsgkafq
        layer_range: [0, 10]
    parameters:
      t:
        - name: self_attn
          value: [0.8, 0.85, 0.9, 0.95, 1.0]
        - name: mlp
          value: [0.9, 0.95, 1.0, 1.05, 1.1]
        - name: layer_norm
          value: [0.6, 0.65, 0.7, 0.75, 0.8]
        - name: embed_tokens
          value: [1.0]

  - name: middle_layers
    sources:
      - model: bamec66557/mergekit-slerp-uhnpbqg
        layer_range: [10, 20]
      - model: bamec66557/mergekit-slerp-tsgkafq
        layer_range: [10, 20]
    parameters:
      t:
        - name: self_attn
          value: [0.7, 0.75, 0.8, 0.85, 0.9]
        - name: mlp
          value: [1.0, 0.95, 0.9, 0.85, 0.8]
        - name: layer_norm
          value: [0.5, 0.55, 0.6, 0.65, 0.7]
        - name: embed_tokens
          value: [1.0]

  - name: upper_middle_layers
    sources:
      - model: bamec66557/mergekit-slerp-uhnpbqg
        layer_range: [20, 30]
      - model: bamec66557/mergekit-slerp-tsgkafq
        layer_range: [20, 30]
    parameters:
      t:
        - name: self_attn
          value: [0.6, 0.65, 0.7, 0.75, 0.8]
        - name: mlp
          value: [0.8, 0.75, 0.7, 0.65, 0.6]
        - name: layer_norm
          value: [0.4, 0.45, 0.5, 0.55, 0.6]
        - name: embed_tokens
          value: [1.0]

  - name: final_layers
    sources:
      - model: bamec66557/mergekit-slerp-uhnpbqg
        layer_range: [30, 40]
      - model: bamec66557/mergekit-slerp-tsgkafq
        layer_range: [30, 40]
    parameters:
      t:
        - name: self_attn
          value: [0.9, 1.0, 1.1, 1.2, 1.3]
        - name: mlp
          value: [0.7, 0.65, 0.6, 0.55, 0.5]
        - name: layer_norm
          value: [0.7, 0.75, 0.8, 0.85, 0.9]
        - name: embed_tokens
          value: [1.0]

# Regularization (Prevent Overfitting During Merging)
regularization:
  methods:
    - name: weight_clipping
      clip_range: [-0.2, 0.2]
    - name: random_noise
      scale: 0.015
    - name: l2_norm
      scale: 0.01

# Postprocessing (Enhance Merged Model Quality)
postprocessing:
  operations:
    - name: random_noise
      scale: 0.0025
    - name: non_linear_scaling
      parameters:
        function: tanh
    - name: sharpening
      intensity: 0.3
    - name: gaussian_smoothing
      sigma: 1.5
    - name: smoothing
      parameters:
        adaptive: true
        range: [0.8, 1.2]
        kernel_size: 5
    - name: normalize
    - name: dynamic_scaling
      scale_range: [0.75, 1.25]

# Optional: Ties Merging (Advanced Technique)
ties:
  enabled: false  # Set to true if ties merging is required
  method: greedy  # Options: greedy, optimal, random
  layers: [0, 10, 20, 30]  # Example layers for ties merging

```
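
Within each slice, the `t` lists act as gradient schedules: the anchor values are spread evenly across the layers of the slice and linearly interpolated between, so every layer gets its own interpolation weight per parameter group (`self_attn`, `mlp`, and so on). A minimal sketch of that interpolation, assuming linear spacing (the helper name is illustrative):

```python
import numpy as np

def t_for_layer(schedule: list[float], layer_idx: int, num_layers: int) -> float:
    """Map a gradient schedule (e.g. [0.8, 0.85, 0.9, 0.95, 1.0]) onto a
    single layer by placing the anchor values evenly over [0, 1] and
    interpolating at the layer's relative position within the slice."""
    anchors = np.linspace(0.0, 1.0, num=len(schedule))
    position = layer_idx / max(num_layers - 1, 1)
    return float(np.interp(position, anchors, schedule))

# Example: the self_attn schedule for the 10-layer 'initial_layers' slice
# ramps smoothly from 0.8 at layer 0 up to 1.0 at layer 9.
sched = [0.8, 0.85, 0.9, 0.95, 1.0]
print([round(t_for_layer(sched, i, 10), 3) for i in range(10)])
```

To reproduce the merge, the configuration can be saved as a file (e.g. `config.yaml`) and run through mergekit's CLI with `mergekit-yaml config.yaml ./output-model-directory`.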
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/bamec66557__MISCHIEVOUS-12B-Mix_III_ex_V-details).

|      Metric       |Value|
|-------------------|----:|
|Avg.               |23.80|
|IFEval (0-Shot)    |43.16|
|BBH (3-Shot)       |34.87|
|MATH Lvl 5 (4-Shot)|13.14|
|GPQA (0-shot)      | 9.40|
|MuSR (0-shot)      |12.77|
|MMLU-PRO (5-shot)  |29.43|
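
The Avg. row is the arithmetic mean of the six benchmark scores:

```python
scores = [43.16, 34.87, 13.14, 9.40, 12.77, 29.43]
print(sum(scores) / len(scores))  # ≈ 23.795, reported as 23.80
```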