schnapper79 committed 2fe6a60 (parent: d471b06): Update README.md

README.md:
---
license: other
license_name: mistral-ai-research-licence
license_link: https://mistral.ai/licenses/MRL-0.1.md
base_model: []
library_name: transformers
tags:
- mergekit
- lumikabra-123B

---
# lumikabra-123B v0.2

<div style="width: auto; margin-left: auto; margin-right: auto; margin-bottom: 3cm">
<img src="https://huggingface.co/schnapper79/lumikabra-123B_v0.1/resolve/main/lumikabra.png" alt="Lumikabra" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

This is lumikabra. It's based on [Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407), merged with Magnum-v2-123B, Luminum-v0.1-123B and Tess-3-Mistral-Large-2-123B.

I shamelessly took this idea from [FluffyKaeloky](https://huggingface.co/FluffyKaeloky/Luminum-v0.1-123B). Like him, I always had my troubles with each of the current large Mistral-based models: either they get repetitive, show too many GPTisms, or are too horny or too unhorny. RP and storytelling are always a matter of taste, and I found myself swiping too often for new answers, or even fixing them when they missed a little spice or cleverness.

Luminum was a great improvement, mixing a lot of desired traits, but I still missed some spice, another sauce. So I took Luminum, added Magnum again, and also Tess for knowledge and structure.

This is a second version with another mixture of the same sauce. It is different from v0.1: not worse, not better, just a little different. Again, I believe it is simply a matter of taste which answers one prefers.

## Quants
- [exl2-8.0](https://huggingface.co/schnapper79/lumikabra-123B_v0.2-exl2-8.0)
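
For full-precision inference you can load the merge with transformers. A minimal sketch follows; the repo id `schnapper79/lumikabra-123B_v0.2` is my assumption (it is not stated on this card), and a 123B model in bf16 needs roughly 250 GB of accelerator memory, so multi-GPU sharding or offloading via `device_map="auto"` is effectively mandatory:

```python
# Minimal loading sketch (the repo id below is assumed, not stated in this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "schnapper79/lumikabra-123B_v0.2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~2 bytes/param: a 123B model needs ~250 GB
    device_map="auto",           # shard across GPUs / offload automatically
)

messages = [{"role": "user", "content": "Give me the opening paragraph of a heist story."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```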

## Merge Details
### Merge Method

This model was merged using [mergekit](https://github.com/cg123/mergekit) with the della_linear merge method, using mistralai_Mistral-Large-Instruct-2407 as a base.
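
For intuition only: della_linear treats each fine-tune as a "task vector" (its delta from the base model), stochastically drops delta entries according to `density` (real DELLA biases drops toward low-magnitude entries, with `epsilon` controlling the spread of drop probabilities), rescales the survivors, and combines the deltas linearly with the per-model `weight`s, scaled by `lambda`. Here is a simplified, hypothetical sketch of that idea on raw tensors, not mergekit's actual implementation:

```python
# Toy sketch of a della_linear-style merge. NOT mergekit's code: real DELLA
# biases dropping toward low-magnitude delta entries; this version drops
# uniformly at random for brevity.
import torch

def della_linear_sketch(base, tuned, weights, densities, lam=1.0):
    merged_delta = torch.zeros_like(base)
    for t, w, d in zip(tuned, weights, densities):
        delta = t - base                              # task vector of one fine-tune
        keep = (torch.rand_like(delta) < d).float()   # keep ~d fraction of entries
        merged_delta += w * (delta * keep) / d        # rescale to preserve expectation
    return base + lam * merged_delta                  # lambda scales the merged delta

# Mirrors the config below: three deltas with weights 0.24/0.34/0.24 and
# densities 0.5/0.8/0.9, applied on top of Mistral-Large-Instruct-2407.
```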

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: anthracite-org_magnum-v2-123b
    parameters:
      weight: 0.24
      density: 0.5
  - model: FluffyKaeloky_Luminum-v0.1-123B
    parameters:
      weight: 0.34
      density: 0.8
  - model: migtissera_Tess-3-Mistral-Large-2-123B
    parameters:
      weight: 0.24
      density: 0.9
merge_method: della_linear
base_model: mistralai_Mistral-Large-Instruct-2407
parameters:
  epsilon: 0.05
  lambda: 1
```
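
To reproduce a merge like this, save the YAML above as `config.yml` and point the model names at local checkpoint directories or Hub ids. A sketch against mergekit's Python entry point, following the usage shown in mergekit's README (recent versions; option names and the output path are placeholders, and the `mergekit-yaml` CLI is equivalent):

```python
# Sketch of running the merge via mergekit's Python API (per mergekit's README);
# equivalent CLI: mergekit-yaml config.yml ./lumikabra-merge --cuda
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./lumikabra-merge",    # hypothetical output directory
    options=MergeOptions(
        cuda=True,                   # use GPU where possible
        copy_tokenizer=True,         # carry the base tokenizer into the output
        lazy_unpickle=True,          # lower peak RAM while loading shards
    ),
)
```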