L3 8b abliterated FP16 GGUF measurements, for information

by Nexesenex - opened

| Model | ARC-C | ARC-E | PPL-512 |
|---|---|---|---|
| Llama 3 8b Inst Ablit v1 | 39.13043478 | 67.01754386 | 11.1241 |
| Llama 3 8b Inst Ablit v2 | 41.47157191 | 61.75438596 | 8.6506 |
| Llama 3 8b Inst Ablit v3 | 51.17056856 | 71.05263158 | 8.4409 |

Oh, silly me, I forgot: thank you!
And now I'm gonna read the note! xD

By the way, I always had the feeling that rather than finetuning such models (especially L3) to "drive" them, which can quickly lead to low-quality, debilitating overfitting, there was instead something to "remove" in order to expunge their refusal mechanism and make them more prompt/context-obedient.
Which is exactly what you did, for the reason you mentioned. Kudos for this, really.
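For anyone curious, the "remove rather than finetune" idea can be sketched in a few lines. This is only an illustrative toy, not the actual abliteration code used for these models: it assumes the refusal behaviour is mediated by a single direction in activation space, and projects that direction out of a hidden-state vector (the names `ablate_direction`, `r`, and `h` are made up for the example).

```python
import numpy as np

def ablate_direction(h, r):
    """Remove the component of activation h along direction r
    (orthogonal projection onto the complement of span{r})."""
    r_unit = r / np.linalg.norm(r)
    return h - np.dot(h, r_unit) * r_unit

rng = np.random.default_rng(0)
r = rng.normal(size=8)   # hypothetical "refusal direction"
h = rng.normal(size=8)   # a hidden-state vector

h_ablated = ablate_direction(h, r)

# After ablation, h_ablated has (numerically) no component along r:
print(abs(np.dot(h_ablated, r / np.linalg.norm(r))) < 1e-9)  # → True
```

In practice the same projection is folded into the model's weight matrices once, offline, so the refusal direction simply can't be written into the residual stream, leaving the rest of the model untouched (which is presumably why the PPL stays reasonable).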
