lizpreciatior committed
Commit 3757b8a
1 Parent(s): f2f65ad

Update README.md

Files changed (1)
  1. README.md +12 -20
README.md CHANGED
@@ -1,50 +1,42 @@
  ---
- license: cc-by-sa-4.0
+ license: cc-by-nc-2.0
  ---


  # lzlv_70B
- ## A Mythomax/MLewd_13B style merge of 70B models
+ ## A Mythomax/MLewd_13B style merge of selected 70B models

- A multi-model merge of several different LLaMA2 70B finetunes for roleplaying and creative work. The goal was to create a model that combined creativity with intelligence and prompt following capabilities for an enhanced experience.
+ A multi-model merge of several LLaMA2 70B finetunes for roleplaying and creative work. The goal was to create a model that combines creativity with intelligence for an enhanced experience.

- Did it work? I think it did. Probably.
+ Did it work? Probably, maybe.


  ## Procedure:

  Models used:
  - NousResearch/Nous-Hermes-Llama2-70b - A great model for roleplaying, but not the best at following complex instructions.
- - Xwin-LM/Xwin-LM-7B-V0.1 - Excellent at following instructions and quite creative with some drawbacks, has been my main model since release so I know it quite well.
- - Doctor-Shotgun/Mythospice-70b - The joker of the three. I was looking for a creative, NSFW-oriented model and came across this while digging through hf. I had never heard of it before, and apparently no one had bothered to release a quantized version of this model. So I downloaded it and did it myself to test it. It turned out to be more or less what I was looking for as my third component, so I used it here.
+ - Xwin-LM/Xwin-LM-7B-V0.1 - Excellent at following instructions and quite creative out of the box, so it seemed like the best available model to act as the base of the merge.
+ - Doctor-Shotgun/Mythospice-70b - The wildcard of the three. I was looking for a creative, NSFW-oriented model and came across this while digging through hf. I had never heard of it before, and apparently no one had bothered to release a quantized version of it, so I downloaded it and quantized it myself to test it. It turned out to be more or less what I was looking for as my third component, so I used it here.

  A big thank you to the creators of the models above. If you look up Mythospice, you will notice that it also includes Nous-Hermes, so it's technically present twice in this mix. This is common practice in 13B merges, so I didn't bother to correct it here either.


- The merging process was heavily inspired by Undi95's approach in Undi95/MXLewdMini-L2-13B. I chose three of my favourite models that seemed to complement each other, and adjusted the ratios according to my preference.
+ I chose three of my favourite models that seemed to complement each other, and adjusted the ratios according to my preference.

- To be specific, the ratios are:
+ The merging process was heavily inspired by Undi95's approach in Undi95/MXLewdMini-L2-13B. To be specific, the ratios are:

  Component 1: Merge of Mythospice x Xwin with SLERP gradient [0.25, 0.3, 0.5].
  Component 2: Merge of Xwin x Hermes with SLERP gradient [0.4, 0.3, 0.25].

- Finally, both Component 1 and Component 2 were merged SLERP weight 0.5
+ Finally, both Component 1 and Component 2 were merged with SLERP using weight 0.5.

  ## Advantages

- I tested this model for a day before publishing it. It seems to retain the instruction-following capabilities of Xwin-70B, while seeming to have adapted a lot of the creativity of the other two models.
- It handls the elaborate instructions in my more complex roleplay SillyTavern cards about as well as the best 70b-Instruct models I've tested. More creative models like Hermes/Mythospice tend to struggle here. At the same time, it seemed to display enhanced creativity that previous go-to models did not have.
- So, is it better? Feels like it to me, subjectively. Is it really better? I don't know, try it for yourself.
+ I tested this model for a few days before publishing it. It seems to retain the instruction-following capabilities of Xwin-70B while having adopted a lot of the creativity of the other two models.
+ It handles my more complex scenarios, which creative models otherwise tend to struggle with, quite well. At the same time, its outputs felt more creative and possibly a bit more NSFW-inclined than Xwin-70B's.
+ So, is it better? Feels like it to me, subjectively. Is it really better? No clue, test it.

  ## Prompt format:
  Vicuna
  USER: [Prompt]
  ASSISTANT:
-
-
-
- ## NSFW
- Due to the nature of some of the models that make up this merge, it can and will produce inappropriate content when prompted. Jailbreaking is not required. The same is the case the other way around: if you ask it directly to commit a hate crime with 0 additional context, it may refuse to do so for the first regeneration or two. This should never happen with more complex prompts or when it's playing a character.
- So be careful (or not).
-
-
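
The updated card describes the recipe only in terms of SLERP gradients. As a rough illustration of how a gradient such as [0.25, 0.3, 0.5] is commonly mapped to per-layer blend ratios, here is a minimal, self-contained NumPy sketch. It is an assumption-laden sketch, not the author's script: the actual merge was presumably done with existing merge tooling, and the function names, the interpolation direction, and the dummy weights are all illustrative.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two same-shape weight tensors.
    t=0 returns a, t=1 returns b (the direction is an assumption here)."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = np.sin(omega)
    mixed = (np.sin((1.0 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape)

def layer_ratios(gradient, num_layers):
    """Spread the gradient's anchor values evenly across all layers."""
    anchors = np.linspace(0.0, 1.0, num=len(gradient))
    positions = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(positions, anchors, gradient)

def gradient_slerp_merge(weights_a, weights_b, gradient):
    """SLERP-merge two per-layer weight lists; layer i uses ratio t_i from the gradient."""
    ts = layer_ratios(gradient, len(weights_a))
    return [slerp(t, wa, wb) for t, wa, wb in zip(ts, weights_a, weights_b)]

# Dummy per-layer weights standing in for the three finetunes (LLaMA2-70B has 80 layers).
layers, rng = 80, np.random.default_rng(0)
mythospice = [rng.normal(size=(8, 8)) for _ in range(layers)]
xwin = [rng.normal(size=(8, 8)) for _ in range(layers)]
hermes = [rng.normal(size=(8, 8)) for _ in range(layers)]

# The recipe from the card: two gradient merges, then a flat 0.5 SLERP of the results.
component_1 = gradient_slerp_merge(mythospice, xwin, gradient=[0.25, 0.3, 0.5])
component_2 = gradient_slerp_merge(xwin, hermes, gradient=[0.4, 0.3, 0.25])
lzlv = gradient_slerp_merge(component_1, component_2, gradient=[0.5, 0.5])
```

In practice this would operate on the actual per-layer tensors loaded from each finetune's checkpoint rather than on dummy arrays.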
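
For completeness, a tiny example of assembling a prompt in the Vicuna format listed in the card. Only the `USER: ... / ASSISTANT:` structure comes from the card; the helper name, the system line, and the sample message are illustrative assumptions.

```python
def build_vicuna_prompt(user_message: str, system: str = "") -> str:
    """Assemble a Vicuna-style prompt; the model is expected to continue after ASSISTANT:."""
    lines = []
    if system:
        lines.append(system)  # optional system line, an assumption rather than part of the card
    lines.append(f"USER: {user_message}")
    lines.append("ASSISTANT:")
    return "\n".join(lines)

print(build_vicuna_prompt(
    "Describe the tavern my character just walked into.",
    system="A chat between a curious user and an artificial intelligence assistant.",
))
# A chat between a curious user and an artificial intelligence assistant.
# USER: Describe the tavern my character just walked into.
# ASSISTANT:
```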