Update README.md
README.md CHANGED
@@ -119,11 +119,11 @@ The DPO fine-tuning successfully recovers the performance loss due to the ablite
 
 NeuralDaredevil-8B-abliterated performs better than the Instruct model on my tests.
 
-You can use it for any application that doesn't require alignment, like role-playing. Tested on LM Studio using the "Llama 3"
+You can use it for any application that doesn't require alignment, like role-playing. Tested on LM Studio using the "Llama 3" and "Llama 3 v2" presets.
 
 ## ⚡ Quantization
 
-Thanks to QuantFactory, ZeroWw, Zoyd, and
+Thanks to QuantFactory, ZeroWw, Zoyd, solidrust, and tarruda for providing these quants.
 
 * **GGUF**: https://huggingface.co/QuantFactory/NeuralDaredevil-8B-abliterated-GGUF
 * **GGUF (FP16)**: https://huggingface.co/ZeroWw/NeuralDaredevil-8B-abliterated-GGUF