---
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
tags:
- abliterated
---
**Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.1.1

Other EXL2 quants:
| **Quant (bpw)** | **Model Size** | **lm_head (bits)** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-4_25bpw_exl2)**</center> | <center>4933 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-6_0bpw_exl2)**</center> | <center>6490 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-6_5bpw_exl2)**</center> | <center>6881 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-8_0bpw_exl2)**</center> | <center>8073 MB</center> | <center>8</center> |
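
Each quant lives in its own repository, so you can fetch the one that fits your VRAM with `huggingface_hub`. A minimal sketch for the 3.5 bpw quant (the `local_dir` name is an arbitrary choice):

```python
# Minimal sketch: download the 3.5 bpw EXL2 quant with huggingface_hub.
# The local_dir path is an arbitrary example.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Zoyd/mlabonne_Llama-3-8B-Instruct-abliterated-dpomix-3_5bpw_exl2",
    local_dir="Llama-3-8B-Instruct-abliterated-dpomix-3_5bpw-exl2",
)
```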


# Llama-3-8B-Instruct-abliterated-dpomix

This model is an experimental DPO fine-tune of an abliterated Llama 3 8B Instruct model on the full [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) dataset.
It improves on Llama 3 8B Instruct's benchmark performance while remaining uncensored.
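
For reference, a fine-tune like this can be reproduced with TRL's `DPOTrainer`. The sketch below is illustrative only: the base checkpoint, every hyperparameter, and the assumption that a recent TRL version handles the conversational preference format directly are mine, not details confirmed by this card.

```python
# Illustrative sketch of a DPO fine-tune on orpo-dpo-mix-40k with TRL.
# The base model and all hyperparameters below are assumptions,
# not the values used to train this checkpoint.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "failspy/Meta-Llama-3-8B-Instruct-abliterated-v3"  # hypothetical abliterated base
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

args = DPOConfig(
    output_dir="llama3-8b-abliterated-dpomix",
    beta=0.1,                       # assumed DPO temperature
    learning_rate=5e-6,             # assumed
    per_device_train_batch_size=2,  # assumed
    gradient_accumulation_steps=8,  # assumed
    num_train_epochs=1,             # the card only says "the full dataset"
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```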

## πŸ† Evaluation

### Nous

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/Llama-3-8B-Instruct-abliterated-dpomix**](https://huggingface.co/mlabonne/Llama-3-8B-Instruct-abliterated-dpomix) [πŸ“„](https://gist.github.com/mlabonne/d711548df70e2c04771cc68ab33fe2b9) | **52.26** | **41.6** | **69.95** | **54.22** | **43.26** |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [πŸ“„](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) [πŸ“„](https://gist.github.com/mlabonne/f46cce0262443365e4cce2b6fa7507fc) | 51.21 | 40.23 | 69.5 | 52.44 | 42.69 |
| [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B) [πŸ“„](https://gist.github.com/mlabonne/91369d9c372f80b6a42a978b454d3b5e) | 49.65 | 37.15 | 69.12 | 51.66 | 40.67 |
| [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [πŸ“„](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [πŸ“„](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |

## πŸ’» Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Llama-3-8B-Instruct-abliterated-dpomix"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the Llama 3 chat prompt from the message list
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and spread it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
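
To run one of the EXL2 quants above instead of the full-precision weights, the usual ExLlamaV2 loading pattern looks roughly like this. It is a sketch against the v0.1.x-style API, and the local path is assumed to be a previously downloaded quant directory:

```python
# Rough sketch of inference with an EXL2 quant via ExLlamaV2 (v0.1.x-style API).
# Assumes the quant has already been downloaded to the directory below.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Llama-3-8B-Instruct-abliterated-dpomix-3_5bpw-exl2"  # assumed local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)          # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_k = 50
settings.top_p = 0.95

prompt = "What is a large language model?"
print(generator.generate_simple(prompt, settings, num_tokens=256))
```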