---
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
tags:
- abliterated
pipeline_tag: text-generation
base_model: mlabonne/NeuralLlama-3-8B-Instruct-abliterated
---
|
|
|
# Llama-3-8B-Instruct-abliterated-dpomix-GGUF
|
This is a quantized (GGUF) version of [mlabonne/NeuralLlama-3-8B-Instruct-abliterated](https://huggingface.co/mlabonne/NeuralLlama-3-8B-Instruct-abliterated), created using llama.cpp.
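As a quick-start sketch, a GGUF file from this repository can be run locally with llama.cpp's `llama-cli`. The repository id and quant filename below are placeholders for illustration; check the repo's file list for the actual names:

```shell
# Download one quant from the repo (repo id and filename are illustrative, not confirmed).
huggingface-cli download <repo-owner>/Llama-3-8B-Instruct-abliterated-dpomix-GGUF \
  Llama-3-8B-Instruct-abliterated-dpomix.Q4_K_M.gguf --local-dir .

# Run an interactive chat; llama.cpp reads the Llama 3 chat template from the GGUF metadata.
./llama-cli -m Llama-3-8B-Instruct-abliterated-dpomix.Q4_K_M.gguf -cnv -n 256
```

Smaller quants (e.g. Q4_K_M) trade some quality for lower memory use; larger ones (e.g. Q8_0) stay closer to the original weights.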
|
|
|
|
|
# Model Description
|
|
|
This model is an experimental DPO fine-tune of an abliterated Llama 3 8B Instruct model on the full [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) dataset.

It improves Llama 3 8B Instruct's performance while remaining uncensored.
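As background on the training objective: DPO optimizes a preference loss over (chosen, rejected) response pairs against a frozen reference model. A minimal sketch of the standard DPO loss, for illustration only (not the exact training code used for this model):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * (policy log-ratio - reference log-ratio))."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)

# When the policy matches the reference, the loss is log(2) ~ 0.693.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 3))  # → 0.693
```

The loss shrinks as the policy assigns relatively more probability to chosen responses than the reference does; `beta` controls how strongly the policy is pulled away from the reference.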
|
|
|
## Applications
|
|
|
This is an uncensored model. You can use it for any application that doesn't require alignment, like role-playing.
|
|
|
Tested on LM Studio using the "Llama 3" preset.
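The "Llama 3" preset corresponds to the Llama 3 Instruct prompt format. A minimal sketch of that template, useful if you call the model through a raw-completion API rather than a chat UI:

```python
def llama3_prompt(system, user):
    """Build a Llama 3 Instruct prompt using Meta's published special tokens."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_prompt("You are a helpful assistant.", "Hello!")
print(prompt.startswith("<|begin_of_text|>"))  # → True
```

The prompt ends with the assistant header so the model's completion is the assistant's reply; generation should stop at the `<|eot_id|>` token.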
|
|
|
|
|
## Evaluation
|
|
|
### Open LLM Leaderboard
|
|
|
This model improves on the performance of the abliterated source model and recovers the MMLU score that was lost during the abliteration process.
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/sCO69BltMkGrq6u7yCIcP.png)
|
|
|
### Nous
|
|
|
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/Llama-3-8B-Instruct-abliterated-dpomix**](https://huggingface.co/mlabonne/Llama-3-8B-Instruct-abliterated-dpomix) [📄](https://gist.github.com/mlabonne/d711548df70e2c04771cc68ab33fe2b9) | **52.26** | **41.6** | **69.95** | **54.22** | **43.26** |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) [📄](https://gist.github.com/mlabonne/f46cce0262443365e4cce2b6fa7507fc) | 51.21 | 40.23 | 69.5 | 52.44 | 42.69 |
| [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B) [📄](https://gist.github.com/mlabonne/91369d9c372f80b6a42a978b454d3b5e) | 49.65 | 37.15 | 69.12 | 51.66 | 40.67 |
| [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |