ycros committed
Commit a02e98c (1 parent: 4c1e84f)

Update README.md

Files changed (1):
  1. README.md +18 -0

README.md CHANGED

---
# DonutHole-8x7B

Bagel, Mixtral Instruct, Holodeck, LimaRP.

> What mysteries lie in the hole of a donut?

Works well with Alpaca prompt formats, and also with the Mistral format. See usage details below.

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/63044fa07373aacccd8a7c53/VILuxGHvEPmDsn0YUX6Gh.webp)

This is similar to [BagelMIsteryTour](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B), but I've swapped out Sensualize for the new Holodeck. I'm not sure yet whether it's better, or how it performs at higher (8k+) contexts.

Similar sampler advice applies as for BMT: minP (0.07 to 0.3, to taste) -> temperature (either dynatemp 0-4ish, or a temperature of 3-4 with a smoothing factor of around 2.5). And yes, that's temperature last. It does okay without repetition penalty up to a point: it doesn't seem to get into a complete jam, but it can start to repeat sentences, so you'll probably need some; around 1.07 over a 1024-token range seems okay. (Rep pen sucks, but better things are coming.)
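
For concreteness, here's a rough sketch of those settings as a Python dict. The parameter names (min_p, smoothing_factor, temperature_last, etc.) follow text-generation-webui-style naming and are my assumption here, not something this card prescribes; map them to whatever your backend calls them.

```python
# Hypothetical sampler preset mirroring the advice above.
# Names are assumed (text-generation-webui style); adjust for your backend.
sampler_settings = {
    "min_p": 0.1,                      # 0.07 - 0.3 to taste
    "temperature": 3.5,                # ~3-4 when combined with smoothing
    "smoothing_factor": 2.5,           # quadratic/smooth sampling knob
    "temperature_last": True,          # minP first, temperature applied last
    "repetition_penalty": 1.07,        # light rep pen, only if repeats appear
    "repetition_penalty_range": 1024,  # window the penalty looks back over
}
```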

I've mainly tested with LimaRP-style Alpaca prompts (instruction/input/response), and briefly with Mistral's own format.
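
For reference, here is a minimal sketch of those two prompt formats as Python template strings; the exact spacing and the `[INST]` wrapping for Mistral follow the commonly used conventions, so check your frontend's template if in doubt.

```python
# Alpaca-style instruction/input/response template
# (LimaRP-style presets use this general shape).
alpaca_prompt = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

# Mistral Instruct style wrapping (assumed standard [INST] format).
mistral_prompt = "<s>[INST] {prompt} [/INST]"

# Example usage:
print(alpaca_prompt.format(instruction="Continue the story.",
                           input="The donut rolled away."))
```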

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details