DavidAU committed
Commit: 07adc00
Parent: a50a8e4

Update README.md

Files changed (1):
  1. README.md (+16 -9)
README.md CHANGED
@@ -40,7 +40,7 @@ pipeline_tag: text-generation

<B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. </B>

- <h2>L3-4X8B-MOE-Dark-Planet-Infinite-25B-GGUF</h2>
+ <h2>L3-MOE-4X8B-Grand-Horror-25B-GGUF</h2>

<img src="dark-p-infinite.jpg" style="float:right; width:300px; height:300px; padding:10px;">

@@ -96,14 +96,6 @@ Example outputs below.

This model is comprised of the following 4 models ("the experts") (in full):

- [ https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF ]
-
- [ https://huggingface.co/DavidAU/L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power-GGUF ]
-
- [ https://huggingface.co/DavidAU/L3-Dark-Planet-Ring-World-8B-F32-GGUF ]
-
- [ https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B-GGUF ]
-
The mixture of experts is set at 2 experts, but you can use 3 or 4 too.

This "team" has a Captain (first listed model), and then all the team members contribute to the to "token"
@@ -148,6 +140,14 @@ Special credit goes to MERGEKIT, without you this project / model would not have

[ https://github.com/arcee-ai/mergekit ]

+ Special thanks to Team "Mradermacher":
+
+ They saved me a tonne of time uploading the quants and created IMATRIX quants too.
+
+ IMATRIX GGUFS:
+
+ [ https://huggingface.co/mradermacher/L3-MOE-4X8B-Grand-Horror-25B-i1-GGUF ]
+
<B>Special Operations Notes for this MOE model:</B>

Because of how this "MOE" model is configured, even though the default is 2 experts, the "selected" 2 will vary during generation.
@@ -208,6 +208,13 @@ This repo contains regular quants and 3 "ARM" quants (format "...Q4_x_x_x.gguf")

For more information on quants, quants choices, and LLM/AI apps to "run" quants see the section below: "Highest Quality Settings..."

+ Special thanks to Team "Mradermacher":
+
+ They saved me a tonne of time uploading the quants and created IMATRIX quants too.
+
+ IMATRIX GGUFS:
+
+ [ https://huggingface.co/mradermacher/L3-MOE-4X8B-Grand-Horror-25B-i1-GGUF ]

<B>Template:</B>
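
A note on fetching the IMATRIX quants credited in the diff above: the repo id comes straight from the README, but the individual quant file names are not listed in this commit, so the sketch below discovers them at runtime with `huggingface_hub` rather than assuming a specific file. This is a minimal sketch, not part of the commit itself:

```python
# Sketch only: list and download one of the IMATRIX GGUF quants from the repo
# credited in the README. The repo id is taken from the card; file names are
# discovered at runtime instead of being assumed.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "mradermacher/L3-MOE-4X8B-Grand-Horror-25B-i1-GGUF"

# Show every GGUF file the repo actually contains.
gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
for name in gguf_files:
    print(name)

# Download one of them (here simply the first listed; pick a specific quant in practice).
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("Downloaded to:", local_path)
```

In practice you would choose a specific quant from the printed list (for example a mid-size Q4_K_M variant) instead of taking the first entry.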
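The README also states that the MoE default of 2 active experts can be raised to 3 or 4. One way to do that, assuming this GGUF exposes the standard llama.cpp MoE metadata key `llama.expert_used_count` and that you run it through llama-cpp-python, is a load-time override; the local model file name below is hypothetical:

```python
# Sketch only: load the GGUF with llama-cpp-python and raise the number of
# active experts from the default 2 to 3 via a metadata override. Assumes the
# model uses the standard llama.cpp MoE key "llama.expert_used_count"; the
# local file name is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-MOE-4X8B-Grand-Horror-25B-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,
    kv_overrides={"llama.expert_used_count": 3},  # 2 is the default; the card says 3 or 4 also work
)

result = llm(
    "Write the opening paragraph of a horror story set in an abandoned mine.",
    max_tokens=200,
    temperature=0.8,
)
print(result["choices"][0]["text"])
```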