MarsupialAI committed on
Commit
4c0b98f
1 Parent(s): 380c2bf

Update README.md

Files changed (1): README.md +17 -1
README.md CHANGED
@@ -7,4 +7,20 @@ tags:
  - ERP
  - chat
  - storywriting
- ---
+ ---
+ # Kitchen Sink 103b
+
+
+ This model is a rotating-stack merge of three 70b models, each of which is itself a merge of multiple merges and finetunes. The result is a large model that contains a little bit of everything - including the kitchen sink.
+
+ Component models for this stack are:
+ - royallab/Aetheria-L2-70B
+ - lizpreciatior/lzlv_70b_fp16_hf
+ - Sao10K/WinterGoddess-1.4x-70B-L2
+
+ Components of those models include Nous-Hermes-Llama2-70b, Xwin-LM-70B-V0.1, Mythospice-70b, Euryale-1.3-L2-70B, tulu-2-dpo-70b, GOAT-70B-Storytelling, Platypus2-70B-instruct, Lila-70B, SunsetBoulevard, and some private LoRAs.
+
+
+ # WTF is a rotating-stack merge?
+
+ Jeb Carter found that the performance of a stacked merge could be significantly improved by reversing the model order in the stack and then doing a linear merge between the original and reversed stacks. That is what I did here: I created three stacked merges, one for each rotation of the three source models, and then did a 1:1:1 linear merge of all three stacks. The exact merge configs can be found in the recipe.txt file; a rough sketch of the idea follows below.
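+
+ To make the shape of the recipe concrete, here is a minimal sketch of one rotated stack, assuming mergekit-style configs. The layer ranges below are placeholders for illustration, not the actual values from recipe.txt:
+
+ ```yaml
+ # Sketch of ONE rotation (Aetheria -> lzlv -> WinterGoddess) as a
+ # passthrough stack. Placeholder layer ranges; real ones are in recipe.txt.
+ slices:
+   - sources:
+       - model: royallab/Aetheria-L2-70B
+         layer_range: [0, 40]
+   - sources:
+       - model: lizpreciatior/lzlv_70b_fp16_hf
+         layer_range: [20, 60]
+   - sources:
+       - model: Sao10K/WinterGoddess-1.4x-70B-L2
+         layer_range: [40, 80]
+ merge_method: passthrough
+ dtype: float16
+ ```
+
+ Building the other two rotations the same way (lzlv -> WinterGoddess -> Aetheria, and WinterGoddess -> Aetheria -> lzlv) gives three stacks, which the final 1:1:1 linear merge then averages, along these lines:
+
+ ```yaml
+ # Sketch of the final 1:1:1 linear merge. The local stack paths are
+ # hypothetical names, not the ones used when building this model.
+ merge_method: linear
+ models:
+   - model: ./stack-rotation-1
+     parameters:
+       weight: 1.0
+   - model: ./stack-rotation-2
+     parameters:
+       weight: 1.0
+   - model: ./stack-rotation-3
+     parameters:
+       weight: 1.0
+ dtype: float16
+ ```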