Update README.md
README.md CHANGED
@@ -21,6 +21,13 @@ I was clearly wrong when I said V2 would be difficult to improve on, because V3
 
 The one thing I do want to improve on is finding a better conversational model than Meta-Llama-3-8B-Instruct; it's good for that use case, but I'm sure there's a better one out there. I tried using llama-3-cat-8b-instruct-v1, but it absolutely tanked the model's situational awareness and kept making blatantly contradictory statements.
 
+# Quantization Formats
+
+**GGUF**
+- Static:
+  - https://huggingface.co/mradermacher/Llama-Salad-4x8B-V3-GGUF
+- Imatrix:
+  - https://huggingface.co/mradermacher/Llama-Salad-4x8B-V3-i1-GGUF
+
 # Details
 - **License**: [llama3](https://llama.meta.com/llama3/license/)
 - **Instruct Format**: [llama-3](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/)
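
For anyone grabbing one of the GGUF quants linked above, here is a minimal sketch (not part of the commit) of downloading a file with `huggingface_hub` and chatting with it through `llama-cpp-python`. The quant filename used here is an assumption; check the repo's file listing for the actual names. The chat template should be picked up from the GGUF metadata, matching the llama-3 instruct format linked under Details.

```python
# Sketch only: the filename below is a guess at one of the quant files in the
# i1 (imatrix) repo; check the repo's file list for the real names before running.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="mradermacher/Llama-Salad-4x8B-V3-i1-GGUF",   # imatrix repo from the list above
    filename="Llama-Salad-4x8B-V3.i1-Q4_K_M.gguf",        # hypothetical filename
)

llm = Llama(model_path=model_path, n_ctx=8192)

# create_chat_completion should apply the chat template stored in the GGUF
# metadata, so plain role/content messages are enough here.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me a one-line summary of MoE models."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

As a rule of thumb, the imatrix quants generally hold up better than the static ones at the same bit-width, which is presumably why both repos are linked.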