# ZEUS-8B-V17-abliterated
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the passthrough merge method.
### Models Merged

The following models were included in the merge:

* [T145/ZEUS-8B-V17](https://huggingface.co/T145/ZEUS-8B-V17)
* [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: T145/ZEUS-8B-V17
        layer_range: [0, 18]
  - sources:
      # Reasoning: The script used to abliterate V2 & V13 identified layer 19 as the target with the most refusal.
      # Substituting the whole layer with that from the uncensored model should effectively abliterate V17 and future similar merges.
      - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
        layer_range: [19, 19]
  - sources:
      - model: T145/ZEUS-8B-V17
        layer_range: [20, 32]
merge_method: passthrough
tokenizer_source: T145/ZEUS-8B-V17
dtype: bfloat16
```
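A merge like this can be reproduced from the configuration above via mergekit's Python entry points. The snippet below is a minimal sketch, not the exact script used for this release; the file name `config.yaml` and the output directory are assumptions.

```python
# Minimal sketch: running the passthrough merge from the YAML above with mergekit.
# Assumptions: mergekit is installed (pip install mergekit), the configuration is
# saved as config.yaml, and ./ZEUS-8B-V17-abliterated is a writable output path.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./ZEUS-8B-V17-abliterated",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer named in tokenizer_source
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The equivalent command-line invocation would be `mergekit-yaml config.yaml ./ZEUS-8B-V17-abliterated`.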
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric | Value (%) |
|---|---|
| **Average** | 26.50 |
| IFEval (0-Shot) | 75.76 |
| BBH (3-Shot) | 31.52 |
| MATH Lvl 5 (4-Shot) | 2.27 |
| GPQA (0-shot) | 7.16 |
| MuSR (0-shot) | 13.13 |
| MMLU-PRO (5-shot) | 29.13 |
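These scores come from the Open LLM Leaderboard's harness run. To sanity-check them locally, one option is lm-evaluation-harness with its leaderboard task group; the snippet below is a hedged sketch, and the task names and batch size are assumptions based on the harness's shipped `leaderboard_*` tasks rather than the leaderboard's exact launch command.

```python
# Sketch: re-running the leaderboard tasks locally with lm-evaluation-harness.
# Assumptions: lm_eval >= 0.4 is installed (pip install lm-eval) and a GPU with
# enough memory for an 8B bfloat16 model is available; few-shot counts are
# baked into the leaderboard_* task configs and match the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=T145/ZEUS-8B-V17-abliterated,dtype=bfloat16",
    tasks=[
        "leaderboard_ifeval",     # 0-shot
        "leaderboard_bbh",        # 3-shot
        "leaderboard_math_hard",  # 4-shot (MATH Lvl 5)
        "leaderboard_gpqa",       # 0-shot
        "leaderboard_musr",       # 0-shot
        "leaderboard_mmlu_pro",   # 5-shot
    ],
    batch_size="auto",
)
print(results["results"])
```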