# ZEUS-8B-V17-abliterated

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which stitches layer slices from the source models together as-is, without interpolating or averaging any weights.

### Models Merged

The following models were included in the merge:

- [T145/ZEUS-8B-V17](https://huggingface.co/T145/ZEUS-8B-V17)
- [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: T145/ZEUS-8B-V17
      layer_range: [0, 18]
  - sources:
    # Reasoning: The script used to abliterate V2 & V13 identified layer 19 as the target with the most refusal.
    # Substituting the whole layer with that from the uncensored model should effectively abliterate V17 and future similar merges.
    - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
      layer_range: [19, 19]
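      # NOTE: mergekit layer ranges are half-open ([start, end)), so [19, 19]
      # selects no layers, and [0, 18] above covers layers 0-17. As written,
      # layers 18-19 appear to be dropped outright rather than layer 19 being
      # swapped for the Lexi layer.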
  - sources:
    - model: T145/ZEUS-8B-V17
      layer_range: [20, 32]
merge_method: passthrough
tokenizer_source: T145/ZEUS-8B-V17
dtype: bfloat16
```
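
To reproduce the merge, save the configuration above to a file and run it either with the `mergekit-yaml` CLI or through mergekit's Python API. Below is a minimal sketch of the latter, following the entry points documented in the mergekit README; the config and output paths are illustrative, not taken from this card.

```python
# Reproduction sketch (illustrative): runs the YAML config above through
# mergekit's Python API. Assumes `pip install mergekit` and that the config
# has been saved as ./zeus-v17-abliterated.yml (hypothetical path).
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./zeus-v17-abliterated.yml"  # hypothetical config path
OUTPUT_PATH = "./ZEUS-8B-V17-abliterated"  # merged model is written here

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # honors tokenizer_source above
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The equivalent CLI invocation would be `mergekit-yaml ./zeus-v17-abliterated.yml ./ZEUS-8B-V17-abliterated --cuda`.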

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here! Summarized results can be found here!

| Metric              | Value (%) |
|---------------------|----------:|
| Average             |     26.50 |
| IFEval (0-shot)     |     75.76 |
| BBH (3-shot)        |     31.52 |
| MATH Lvl 5 (4-shot) |      2.27 |
| GPQA (0-shot)       |      7.16 |
| MuSR (0-shot)       |     13.13 |
| MMLU-PRO (5-shot)   |     29.13 |
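
The model can be run locally with `transformers`. A minimal inference sketch (untested), assuming the merged weights are published on the Hugging Face Hub under this repo id:

```python
# Local inference sketch (illustrative, untested). Assumes the merged weights
# are available on the Hub as T145/ZEUS-8B-V17-abliterated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "T145/ZEUS-8B-V17-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize model merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```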