Update README.md
README.md CHANGED
@@ -7,6 +7,8 @@ tags:
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/x44nNbPTpv0zGTqA1Jb2q.png)

+_Note: the [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) merge version is available [here](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B/)_
+
### *Weights*

- [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.5

@@ -42,4 +44,20 @@ You can use these prompt templates, but I recommend using ChatML.
### User:
{user}
### Assistant:
```
+
+# Quantized versions
+
+Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
+
+##### GPTQ
+
+- [TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GPTQ)
+
+##### GGUF
+
+- [TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GGUF)
+
+##### AWQ
+
+- [TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-AWQ](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-AWQ)
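The *Weights* entry in the diff above gives a source model's merge ratio (0.5 for OpenHermes-2.5-Mistral-7B). As a rough illustration of the arithmetic a linear weighted merge performs, here is a toy sketch in plain Python; the helper name and two-model setup are my own assumptions, and a real merge averages full model tensors rather than scalar dicts:

```python
# Toy sketch of a linear weighted merge of parameter dicts.
# Real merge tooling operates on full model weight tensors; this only
# illustrates the per-parameter weighted average.

def linear_merge(models: list[dict], weights: list[float]) -> dict:
    """Average parameter dicts with the given per-model weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "merge weights should sum to 1"
    keys = models[0].keys()
    return {k: sum(w * m[k] for m, w in zip(models, weights)) for k in keys}

a = {"layer.w": 1.0}
b = {"layer.w": 3.0}
print(linear_merge([a, b], [0.5, 0.5]))  # → {'layer.w': 2.0}
```

With both weights at 0.5 this is a plain average of the two parents.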
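The card shows a `### User:` prompt template but recommends ChatML. A minimal sketch of building both single-turn formats as strings (the helper names are mine, not part of the card):

```python
def chatml_prompt(system: str, user: str) -> str:
    # ChatML: the format the card recommends for this model.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def template_prompt(user: str) -> str:
    # The "### User:" template shown in the card.
    return f"### User:\n{user}\n### Assistant:\n"

print(chatml_prompt("You are a helpful assistant.", "Hi!"))
```

Either string is then passed to the model as the raw prompt, with generation stopping at `<|im_end|>` (for ChatML) or a new `### User:` marker.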