---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- localfultonextractor/Erosumika-7B
- Nitral-AI/Infinitely-Laydiculous-7B
- Kunocchini-7b-128k-test
- Endevor/EndlessRP-v3-7B
- ChaoticNeutrals/BuRP_7B
- daybreak-kunoichi-2dpo-7b
model-index:
- name: Confluence-Renegade-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 31.91
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nekochu/Confluence-Renegade-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 45.38
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nekochu/Confluence-Renegade-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 31.48
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nekochu/Confluence-Renegade-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 51.47
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nekochu/Confluence-Renegade-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.14
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nekochu/Confluence-Renegade-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nekochu/Confluence-Renegade-7B
      name: Open LLM Leaderboard
---

My first merge of 7B RP models using mergekit. They are just RP models trending on Reddit, and half of the final blend is BuRP_7B; I haven't tried any of them individually. A **dumb** merge, but hopefully a lucky one! ^^'

## Update 03/2024:

- Original model card: Confluence-Renegade-7B <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/Confluence-Renegade-7B-v2-8.0bpw-h8-exl2">[8.0bpw-h8-exl2]</a>
- Added model and merge recipe branch: <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/Confluence-Renegade-7B-v2/mergekit_config.yml">Confluence-Renegade-7B-v2</a>
- Added model and merge recipe branch: <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/RoleBeagle-Moistral-11B-v2/mergekit_config.yml">RoleBeagle-Moistral-11B-v2</a> <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/RoleBeagle-Moistral-7B-v2/mergekit_config.yml">[7B truncated]</a>, plus quants: <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/RoleBeagle-Moistral-11B-v2-2.4bpw-h6-exl2/mergekit_config.yml">RoleBeagle-Moistral-11B-v2-2.4bpw-h6-exl2</a>, <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/RoleBeagle-Moistral-11B-v2-4.25bpw-h6-exl2">4.25bpw-h6</a>, <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/RoleBeagle-Moistral-11B-v2-8.0bpw-h8-exl2">8.0bpw-h8</a>
- Added branch: <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/Confluence-Shortcake-20B/mergekit_config.yml">Confluence-Shortcake-20B model recipe</a>, plus quants: <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/Confluence-Shortcake-20B-2.4bpw-h6-exl2">Confluence-Shortcake-20B-2.4bpw-h6-exl2</a>, <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/Confluence-Shortcake-20B-4.25bpw-h6-exl2">4.25bpw-h6</a>, <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/Confluence-Shortcake-20B-8.0bpw-h8-exl2">8.0bpw-h8</a>

<div style="width: auto; margin-left: auto; margin-right: auto">
    <img src="https://i.imgur.com/d38LuOG.png" alt="Nekochu" style="width: 250%; min-width: 400px; display: block; margin: auto;">
</div>

The name: *Confluence* for the merging of many unique RP models, and *Renegade* because most of them ship with no guardrails.

## Download branch instructions

```shell
git clone --single-branch --branch Confluence-Shortcake-20B-2.4bpw-h6-exl2 https://huggingface.co/Nekochu/Confluence-Renegade-7B
```
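
If you prefer the `huggingface_hub` Python API over git, the same branch can be fetched by revision. A minimal sketch, assuming `huggingface_hub` is installed; the branch name is any of those listed above:

```python
# Download a specific branch (revision) of the repo without git.
# Sketch only: assumes `pip install huggingface_hub`.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Nekochu/Confluence-Renegade-7B",
    revision="Confluence-Shortcake-20B-2.4bpw-h6-exl2",  # any branch name works here
)
print(local_dir)  # path to the downloaded snapshot
```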

### Configuration Confluence-Renegade-7B

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./modela/Erosumika-7B
    parameters:
      density: [1, 0.8, 0.6]
      weight: 0.2
  - model: ./modela/Infinitely-Laydiculous-7B
    parameters:
      density: [0.9, 0.7, 0.5]
      weight: 0.2
  - model: ./modela/Kunocchini-7b-128k-test
    parameters:
      density: [0.8, 0.6, 0.4]
      weight: 0.2
  - model: ./modela/EndlessRP-v3-7B
    parameters:
      density: [0.7, 0.5, 0.3]
      weight: 0.2
  - model: ./modela/daybreak-kunoichi-2dpo-7b
    parameters:
      density: [0.5, 0.3, 0.1]
      weight: 0.2
merge_method: dare_linear
base_model: ./modela/Mistral-7B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
name: intermediate-model
---
slices:
  - sources:
      - model: intermediate-model
        layer_range: [0, 32]
      - model: ./modela/BuRP_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: intermediate-model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
name: gradient-slerp
```
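
For intuition: the `t` lists in the slerp stage are gradients that mergekit spreads across the layer stack, so self-attention tensors lean increasingly toward `BuRP_7B` in later layers while MLP tensors do the opposite. The sketch below illustrates that schedule; it assumes piecewise-linear interpolation and is not mergekit's actual code:

```python
# Illustrate how a gradient list like [0, 0.5, 0.3, 0.7, 1] could map to a
# per-layer blend factor t over 32 layers (t=0 keeps the base model's tensor,
# t=1 takes the other model's). Piecewise-linear interpolation is assumed.
import numpy as np

def gradient_to_per_layer(values, num_layers=32):
    anchors = np.linspace(0, num_layers - 1, num=len(values))  # evenly spaced anchor points
    return np.interp(np.arange(num_layers), anchors, values)

self_attn_t = gradient_to_per_layer([0, 0.5, 0.3, 0.7, 1])
mlp_t = gradient_to_per_layer([1, 0.5, 0.7, 0.3, 0])
print(self_attn_t.round(2))
print(mlp_t.round(2))
```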

Run the merge with:

```shell
mergekit-mega config.yml ./output-model-directory --cuda --allow-crimes --lazy-unpickle
```

### Models Merged Confluence-Renegade-7B

The following models were included in the merge:
- [localfultonextractor/Erosumika-7B](https://huggingface.co/localfultonextractor/Erosumika-7B)
- [Nitral-AI/Infinitely-Laydiculous-7B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B)
- [Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test)
- [Endevor/EndlessRP-v3-7B](https://huggingface.co/Endevor/EndlessRP-v3-7B)
- [ChaoticNeutrals/BuRP_7B](https://huggingface.co/ChaoticNeutrals/BuRP_7B)
- [daybreak-kunoichi-2dpo-7b](https://huggingface.co/crestf411/daybreak-kunoichi-2dpo-7b)
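
Once merged (or a full-weight branch is downloaded), the result loads like any Mistral-architecture checkpoint. A minimal sketch with `transformers`; note the exl2-quantized branches need an ExLlamaV2-based loader instead:

```python
# Load and sample from the merged model; sketch assumes a full-weight branch,
# not an exl2 quant. bfloat16 matches the dtype used in the merge config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nekochu/Confluence-Renegade-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```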

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Nekochu__Confluence-Renegade-7B)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |37.23|
|AI2 Reasoning Challenge (25-Shot)|31.91|
|HellaSwag (10-Shot)              |45.38|
|MMLU (5-Shot)                    |31.48|
|TruthfulQA (0-shot)              |51.47|
|Winogrande (5-shot)              |63.14|
|GSM8k (5-shot)                   | 0.00|