FluffyKaeloky committed on
Commit
00e25f4
1 Parent(s): fb0c810

Update README.md

Files changed (1)
  1. README.md +31 -10
README.md CHANGED
@@ -6,25 +6,27 @@ tags:
 - merge
 
 ---
-# LuminumMistral-123B
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://huggingface.co/FluffyKaeloky/Luminum-v0.1-123B/resolve/main/LuminumCover.png" alt="Luminum" style="width: 60%; min-width: 400px; display: block; margin: auto;">
+</div>
+
+# LuminumMistral-123B
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+## Overview
 
-## Merge Details
-
-I present Luminum.
+I present Luminum-123B.
 
 This is a merge using Mistral Large as a base, including Lumimaid-v0.2-123B and Magnum-v2-123B.
-I felt like Magnum rambled too much and Lumimaid lost slightly too much brainpower, so I used the Mistral Large base, but it was lacking some moist.
+I felt like Magnum rambled too much and Lumimaid lost slightly too much brainpower, so I used the Mistral Large base for a long while, but it was lacking some moist.
 
-On a whim, I decided to merge both Lumimaid and Magnum on top of Mistral Large, and while I wasn't expecting much, I've been very surprised by the results.
+On a whim, I decided to merge both Lumimaid and Magnum on top of Mistral Large, and while I wasn't expecting much, I've been very surprised by the results. I've found that this model keeps the brainpower of the Mistral base while also inheriting the vocabulary of Lumimaid and the creative descriptions of Magnum, without rambling too much.
 
 I've tested this model quite extensively at and above 32k context with great success. In theory it should allow the full 128k context, though I've only gone to 40-50k max.
 It's become my new daily driver.
 
-
-I'll update the model card and add artwork tomorrow, am tired.
-
+The only negative I've found is that it tends to generate long responses if you let it, which it probably gets from Magnum. Just don't let it grow its answer size over and over.
 
 I recommend these settings:
 - Minp: 0.08
@@ -34,7 +36,26 @@ I recommend these settings:
 - No Repeat NGram Size: 2 *
 
-*Since I am using TabbyAPI and exl2, DRY sampling has just become available to me. I haven't tried it yet, but it's likely that DRY would work better than NGram.
+*I didn't get the chance to mess with DRY yet.
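For readers unfamiliar with the Min-p setting recommended above, here is a small sketch of the cutoff it applies (an illustration of min-p sampling in general, not TabbyAPI's actual code): tokens whose probability falls below `min_p` times the top token's probability are dropped before sampling.

```python
import math

def min_p_filter(logits, min_p=0.08):
    """Keep only tokens whose probability is at least min_p times the
    most likely token's probability (illustrative min-p sampling)."""
    # Softmax over the logits (subtract the max for numerical stability).
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    # Drop tokens below min_p * P(top token), then renormalize survivors.
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    z = sum(kept.values())
    return {tok: p / z for tok, p in kept.items()}

# Toy vocabulary of three tokens: the unlikely "zzz" falls below the cutoff.
probs = min_p_filter({"the": 5.0, "a": 4.0, "zzz": 0.0}, min_p=0.08)
```

With a flat distribution many candidates survive the cutoff, while a confident distribution prunes aggressively; that adaptivity is why min-p pairs well with low settings like 0.08.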
+
+## Template
+
+All the merged models use the Mistral template, and so does this one:
+
+```
+<s>[INST] {input} [/INST] {output}</s>
+```
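As a quick illustration of applying this template across turns, a small helper can build the prompt string (a sketch: the function name is my own, and the multi-turn layout follows Mistral's usual convention of closing each assistant reply with `</s>`):

```python
def mistral_prompt(turns):
    """Format (user, assistant) pairs with the Mistral instruct template.
    Pass None as the assistant text for the final, unanswered turn."""
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            # Completed turns end with the end-of-sequence token.
            prompt += f" {assistant}</s>"
    return prompt

p = mistral_prompt([("Hi there", "Hello!"), ("Tell me a story", None)])
```

Note there is no system-prompt slot in this basic form; frontends like SillyTavern typically fold the system text into the first `[INST]` block.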
+
+-------------------
+
+# Quants
+
+## EXL2
+
+* [4.0bpw](https://huggingface.co/FluffyKaeloky/Luminum-v0.1-123B-exl2-4.0bpw)
+
+-------------------
 
 ### Merge Method