
ExLlamaV2 quant (exl2 / 6.5 bpw), made with ExLlamaV2 v0.1.1.

Other EXL2 quants:

| Quant (bpw) | Model Size | lm_head (bits) |
| ----------- | ---------- | -------------- |
| 2.2         | 3250 MB    | 6              |
| 2.5         | 3479 MB    | 6              |
| 3.0         | 3895 MB    | 6              |
| 3.5         | 4310 MB    | 6              |
| 3.75        | 4519 MB    | 6              |
| 4.0         | 4727 MB    | 6              |
| 4.25        | 4935 MB    | 6              |
| 5.0         | 5559 MB    | 6              |
| 6.0         | 6497 MB    | 8              |
| 6.5         | 6913 MB    | 8              |
| 8.0         | 8150 MB    | 8              |
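For reference, a minimal loading sketch using the exllamav2 Python library (class names as in the upstream exllamav2 examples for the 0.1.x releases; the local weights path is an assumption):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Daredevil-8B-abliterated-exl2"  # assumed download path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, 64))
```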

Daredevil-8B-abliterated


Abliterated version of mlabonne/Daredevil-8B using failspy's notebook.

It is based on the technique described in the blog post "Refusal in LLMs is mediated by a single direction".
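In rough terms, the method collects residual-stream activations on harmful and harmless instructions, takes the difference of their means as a "refusal direction", and removes that direction from the model's weights. A minimal PyTorch sketch of those two steps (function names and the single-layer setup are illustrative assumptions, not the notebook's actual code):

```python
import torch

def refusal_direction(harmful: torch.Tensor, harmless: torch.Tensor) -> torch.Tensor:
    """Unit 'refusal direction': difference of mean residual-stream
    activations on harmful vs. harmless prompts, shape (n_prompts, d_model)."""
    d = harmful.mean(dim=0) - harmless.mean(dim=0)
    return d / d.norm()

def orthogonalize(weight: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """Remove the refusal direction from a matrix that writes into the
    residual stream: W <- (I - d d^T) W. Assumes nn.Linear layout
    (out_features = d_model, in_features), so rows index d_model."""
    return weight - torch.outer(d, d @ weight)

# Illustrative usage on one attention output projection (hypothetical handle):
# layer.self_attn.o_proj.weight.data = orthogonalize(
#     layer.self_attn.o_proj.weight.data, d)
```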

Thanks to Andy Arditi, Oscar Balcells Obeso, Aaquib111, Wes Gurnee, Neel Nanda, and failspy.

⚡ Quantization

πŸ† Evaluation

Nous

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
| ----- | ------- | ------- | ------- | ---------- | -------- |
| mlabonne/Daredevil-8B 📄 | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| mlabonne/Daredevil-8B-abliterated 📄 | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| mlabonne/Llama-3-8B-Instruct-abliterated-dpomix 📄 | 52.26 | 41.6 | 69.95 | 54.22 | 43.26 |
| meta-llama/Meta-Llama-3-8B-Instruct 📄 | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 📄 | 51.21 | 40.23 | 69.5 | 52.44 | 42.69 |
| mlabonne/OrpoLlama-3-8B 📄 | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| meta-llama/Meta-Llama-3-8B 📄 | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |