Update README.md
README.md CHANGED
@@ -48,6 +48,7 @@ Support me on Github Sponsors: https://github.com/sponsors/teknium1
   - [BigBench](#bigbench)
   - [Averages Compared](#averages-compared)
 3. [Prompt Format](#prompt-format)
+4. [Quantized Models](#quantized-models)
 
 
 ## Example Outputs
@@ -213,6 +214,9 @@ In LM-Studio, simply select the ChatML Prefix on the settings side pane:
 
 # Quantized Models:
 
-
+The Bloke has quantized Open Hermes 2 in GPTQ, GGUF, and AWQ! Available here:
+https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ
+https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GGUF
+https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-AWQ
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)