Steelskull committed · Commit 3b30fcb · Parent(s): 769a292
Update README.md

README.md CHANGED
<p>This model is my second attempt at a 72b model; as usual, my goal is to merge the robust storytelling of multiple models while attempting to maintain intelligence.</p>
<p>Use Qwen format.</p>
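<p>For reference, "Qwen format" here means the ChatML-style prompt template that Qwen models use. A minimal sketch of rendering a conversation by hand is below (the helper name is illustrative; in practice the tokenizer's <code>apply_chat_template</code> handles this):</p>

```python
# Sketch of the ChatML-style template used by Qwen models:
# each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
def format_qwen_prompt(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts as a ChatML string."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        prompt += "<|im_start|>assistant\n"
    return prompt

prompt = format_qwen_prompt([
    {"role": "system", "content": "You are a helpful storyteller."},
    {"role": "user", "content": "Begin the tale."},
])
```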
<h2>Quants: (List of badasses)</h2>
<p>GGUF Quants:</p>
<p> - bartowski: <a href="https://huggingface.co/bartowski/Q2.5-MS-Mistoria-72b-v2-GGUF" target="_blank">Combined-GGUF</a></p>
<p> - mradermacher: <a href="https://huggingface.co/mradermacher/Q2.5-MS-Mistoria-72b-v2-GGUF" target="_blank">GGUF</a> // <a href="https://huggingface.co/mradermacher/Q2.5-MS-Mistoria-72b-v2-i1-GGUF" target="_blank">Imat-GGUF</a></p>
<h3>Config:</h3>
<pre><code>MODEL_NAME = "Q2.5-MS-Mistoria-72b-v2"