munish0838 committed
Commit 42b6ff8
1 parent: 8975a1f

Update README.md

Files changed (1):
  1. README.md +0 -4
README.md CHANGED
@@ -121,10 +121,6 @@ code {
 <p>L3-Aethora-15B was crafted using the abliteration method to adjust model responses. The model's refusal behavior is inhibited, yielding more compliant and accommodating dialogue. It then underwent a modified DUS (Depth Up-Scaling) merge (a technique originally used by @Elinas), employing a passthrough merge to create a 15B model, with specific adjustments (zeroing) to 'o_proj' and 'down_proj', improving efficiency and reducing perplexity. This created AbL3In-15b.<br>
 <p>AbL3In-15b was then trained for 4 epochs with the RSLoRA & DoRA training methods on the Aether-Lite-V1.2 dataset, which contains ~82,000 high-quality samples designed to strike a fine balance between creativity and intelligence at roughly a 60/40 split, with slop filtered out.</p>
 <p>This model is trained on the L3 prompt format.</p>
-<h2>Quants:</h2>
-<li><a href="https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF" target="_blank">Mradermacher/L3-Aethora-15B-GGUF</a></li>
-<li><a href="https://huggingface.co/mradermacher/L3-Aethora-15B-i1-GGUF" target="_blank">Mradermacher/L3-Aethora-15B-i1-GGUF</a></li>
-<li><a href="https://huggingface.co/NikolayKozloff" target="_blank">NikolayKozloff/L3-Aethora-15B-GGUF</a></li>
 <p></p>
 <h2>Dataset Summary: (Filtered)</h2>
 <p>Filtered Phrases: GPTslop, Claudisms</p>
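The depth up-scaling step described in the README can be illustrated in code. Below is a minimal Python sketch, assuming a 32-layer Llama 3 base and an arbitrary duplicated span; the actual layer ranges and merge tooling used for AbL3In-15b are not specified here, so treat every name and range as an assumption, not the recipe.

```python
import copy
import torch
from transformers import AutoModelForCausalLM

# Hedged sketch of a DUS-style passthrough merge: duplicate a span of decoder
# layers, then zero 'o_proj' and 'down_proj' in the copies. The model name and
# the duplicated range [8, 24) are placeholders, not the AbL3In-15b recipe.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16
)
layers = model.model.layers

# Deep-copy the middle span so the original layers keep their trained weights.
dup = [copy.deepcopy(layers[i]) for i in range(8, 24)]

for layer in dup:
    # With both projections zeroed, attention and MLP outputs vanish, so each
    # copied block initially passes the residual stream through unchanged.
    torch.nn.init.zeros_(layer.self_attn.o_proj.weight)
    torch.nn.init.zeros_(layer.mlp.down_proj.weight)

# Splice: original layers 0-23, zeroed copies of 8-23, original layers 24-31.
model.model.layers = torch.nn.ModuleList(list(layers[:24]) + dup + list(layers[24:]))
model.config.num_hidden_layers = len(model.model.layers)
# NOTE: a production merge (e.g. mergekit's passthrough method) also renumbers
# per-layer attention indices for KV-cache bookkeeping; this sketch omits that.
```

Zeroing 'o_proj' and 'down_proj' in the duplicated layers makes each copy contribute nothing to the residual stream at initialization, so the up-scaled model's perplexity stays close to the base until fine-tuning re-learns those projections.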
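The RSLoRA & DoRA training mentioned in the README maps directly onto Hugging Face's `peft` library, which exposes both as flags on one adapter config. A minimal sketch follows; the rank, alpha, target modules, and model path are illustrative assumptions, not the hyperparameters of the 4-epoch Aether-Lite-V1.2 run.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hedged sketch of an RSLoRA + DoRA adapter setup via `peft`; r, alpha, and
# target_modules are assumptions, not the values used for L3-Aethora-15B.
base = AutoModelForCausalLM.from_pretrained("./AbL3In-15b")  # hypothetical path

lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_rslora=True,  # rank-stabilized LoRA: scales by alpha / sqrt(r)
    use_dora=True,    # weight-decomposed low-rank adaptation
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

RSLoRA changes only the adapter scaling factor (alpha over the square root of the rank instead of the rank itself), while DoRA decomposes each adapted weight into magnitude and direction components; since `peft` implements both as config flags, they compose in a single training run.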
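Since the card only names the prompt format, here is the standard Llama 3 instruct template that "L3 prompt format" refers to, written as a Python template string; the special tokens follow the published Llama 3 template, and the placeholder names are mine.

```python
# Standard Llama 3 instruct prompt template; {system_prompt} and
# {user_message} are illustrative placeholder names.
L3_PROMPT = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "{user_message}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

prompt = L3_PROMPT.format(
    system_prompt="You are a helpful assistant.",
    user_message="Hello!",
)
```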