Update README.md
README.md CHANGED
```diff
@@ -1,9 +1,18 @@
 ---
-base_model:
+base_model:
+- icefog72/Kunokukulemonchini-32k-7b
+- icefog72/Mixtral_AI_Cyber_3.m1-BigL
+- LeroyDyer/Mixtral_AI_Cyber_3.m1
+- Undi95/BigL-7B
 library_name: transformers
 tags:
 - mergekit
 - merge
+- alpaca
+- mistral
+- not-for-all-audiences
+- nsfw
+license: cc-by-nc-4.0
 
 ---
 # IceLemonTeaRP-32k-7b
@@ -11,6 +20,12 @@ tags:
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
+
+Cooked merge to fix [icefog72/IceTeaRP-7b](https://huggingface.co/icefog72/IceTeaRP-7b) repetition problems.
+
+Prompt template: Alpaca, maybe ChatML
+
+measurement.json for quanting exl2 included.
 ### Merge Method
 
 This model was merged using the SLERP merge method.
@@ -18,8 +33,10 @@ This model was merged using the SLERP merge method.
 ### Models Merged
 
 The following models were included in the merge:
-*
-*
+* icefog72/Kunokukulemonchini-32k-7b
+* icefog72/Mixtral_AI_Cyber_3.m1-BigL
+* [LeroyDyer/Mixtral_AI_Cyber_3.m1](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_3.m1)
+* [Undi95/BigL-7B](https://huggingface.co/Undi95/BigL-7B)
 
 ### Configuration
 
@@ -29,12 +46,12 @@ The following YAML configuration was used to produce this model:
 
 slices:
 - sources:
-  - model:
+  - model: Mixtral_AI_Cyber_3.m1-BigL
     layer_range: [0, 32]
-  - model:
+  - model: Kunokukulemonchini-32k-7b
     layer_range: [0, 32]
 merge_method: slerp
-base_model:
+base_model: Kunokukulemonchini-32k-7b
 parameters:
   t:
   - filter: self_attn
```
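The card names Alpaca as the likely prompt template. For reference, the standard Alpaca format looks like the following; the instruction preamble is the common convention, not something this card pins down:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```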
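The bundled measurement.json lets exl2 quantizers skip the calibration measurement pass. A sketch using exllamav2's convert.py, assuming its usual flags; the paths and the 4.0 bpw target are illustrative, not part of the card:

```sh
# -m reuses the shipped measurement file instead of re-measuring
python convert.py \
  -i ./IceLemonTeaRP-32k-7b \
  -o ./exl2-workdir \
  -cf ./IceLemonTeaRP-32k-7b-4.0bpw-exl2 \
  -b 4.0 \
  -m ./IceLemonTeaRP-32k-7b/measurement.json
```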
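SLERP blends the two parents along the great circle between their weight vectors rather than averaging them linearly, which better preserves tensor norms. A minimal sketch of the idea in Python, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between weight tensors a and b; t=0 returns a, t=1 returns b."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors on the unit hypersphere
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0 + eps, 1.0 - eps))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        out = (1 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1 - t) * omega) / sin_omega) * a_flat \
            + (torch.sin(t * omega) / sin_omega) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```

In the configuration shown in the diff, the `t:` block sets this interpolation factor per tensor group (`filter: self_attn` targets the attention weights), with `base_model` supplying the t=0 endpoint.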
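To reproduce a merge from a config like the one above, mergekit's `mergekit-yaml` entry point does the work. The config file name and output path here are illustrative, and the model names in the YAML are local directory names, so they would need to resolve to downloaded checkpoints or be replaced with full Hugging Face IDs:

```sh
pip install mergekit
mergekit-yaml ./icelemontearp.yml ./IceLemonTeaRP-32k-7b --cuda
```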