---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - mergekit
  - merge
base_model: Nohobby/Qwen2.5-32B-Peganum-v0.1
---

4bpw exl2 quant of: https://huggingface.co/Nohobby/Qwen2.5-32B-Peganum-v0.1
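
If you haven't run exl2 quants before, here is a minimal loading sketch using the exllamav2 library. The local directory path is a placeholder for wherever you download this quant:

```python
# Minimal sketch: load an exl2 quant with exllamav2 and generate once.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Qwen2.5-32B-Peganum-v0.1-4bpw-exl2"  # placeholder local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # splits the weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# ChatML prompt format (see Overview below)
prompt = "<|im_start|>user\nWrite a short scene.<|im_end|>\n<|im_start|>assistant\n"
print(generator.generate_simple(prompt, settings, num_tokens=256))
```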



# Peganum

Many thanks to the authors of the models used!

Qwen2.5 | Qwen2.5-Instruct | Qwen2.5-Instruct-abliterated | RPMax-v1.3-32B | EVA-Instruct-32B-v2 (EVA-Qwen2.5-32B-v0.2 + Qwen2.5-Gutenberg-Doppel-32B)


## Overview

**Main uses:** RP

**Prompt format:** ChatML

Just trying out merging Qwen, because why not. The result has slightly fewer refusals than other Qwen tunes, and performance seems unaffected by the abliteration. I've hardly used Qwen2.5 models before, so I can't really compare them beyond that.
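
For reference, a quick way to render the ChatML format is the tokenizer's built-in chat template (this assumes the repo's tokenizer ships one; Qwen2.5 models do):

```python
# Sketch: render a ChatML prompt via the tokenizer's chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Nohobby/Qwen2.5-32B-Peganum-v0.1")
messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Hello!"},
]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# <|im_start|>system
# You are a creative roleplay partner.<|im_end|>
# <|im_start|>user
# Hello!<|im_end|>
# <|im_start|>assistant
```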


## Quants

GGUF


## Settings

Samplers: https://huggingface.co/Nohobby/Qwen2.5-32B-Peganum-v0.1/resolve/main/Peganum.json

You can also use the SillyTavern presets listed on the EVA-v0.2 model card.


## Merge Details

### Merging steps

#### Step1

(Config taken from here)

```yaml
base_model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 64]
    model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
  - layer_range: [0, 64]
    model: unsloth/Qwen2.5-32B-Instruct
    parameters:
      weight: -1.0
```
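
If I'm reading mergekit's task_arithmetic right, the -1.0 weight subtracts the vanilla Instruct delta from the abliterated base, so this step roughly amplifies the abliteration direction (about 2 x abliterated - Instruct). Each config here can be reproduced with mergekit; a minimal sketch of its Python API follows, assuming the YAML above is saved as step1.yml (the paths and options are placeholders, and `mergekit-yaml step1.yml ./Step1 --cuda` does the same from the CLI):

```python
# Sketch: run the Step1 merge with mergekit's Python API.
# Assumes `pip install mergekit` and the config above saved as step1.yml.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("step1.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Step1",  # Step2 below refers to this output as `model: Step1`
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```

The same pattern runs Step2 and the final merge; just point the later configs at the earlier output directories.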

#### Step2

(Config taken from here)

```yaml
models:
  - model: unsloth/Qwen2.5-32B
  - model: Step1
    parameters:
      weight: [0.50, 0.20]
      density: [0.75, 0.55]
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
    parameters:
      weight: [0.50, 0.80]
      density: [0.75, 0.85]
merge_method: ties
base_model: unsloth/Qwen2.5-32B
parameters:
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
```
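
The bracketed weight/density values are mergekit gradients: the anchor values are interpolated across the layer stack, so the Step1 delta fades from 0.50 to 0.20 toward the later layers while RPMax ramps up. An illustrative sketch of the interpolation (not mergekit's actual code):

```python
# Illustrative only: how a mergekit gradient list maps to per-layer values.
import numpy as np

def layer_gradient(anchors, num_layers=64):
    # Anchor points are spread evenly over the layers, then interpolated.
    xs = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), xs, anchors)

print(layer_gradient([0.50, 0.20])[:3])   # early layers: near 0.50
print(layer_gradient([0.50, 0.20])[-3:])  # late layers: down to 0.20
```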

#### Peganum

(Config taken from here)

```yaml
models:
  - model: Step2
    parameters:
      weight: 1
      density: 1
  - model: ParasiticRogue/EVA-Instruct-32B-v2
    parameters:
      weight: [0.0, 0.2, 0.66, 0.8, 1.0, 0.8, 0.66, 0.2, 0.0]
      density: 0.5
merge_method: ties
base_model: unsloth/Qwen2.5-32B
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
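
The nine-value weight list works the same way: interpolated across the layers it forms a bell curve, so EVA-Instruct contributes most around the middle of the stack and Step2 dominates the first and last layers (try `layer_gradient([0.0, 0.2, 0.66, 0.8, 1.0, 0.8, 0.66, 0.2, 0.0])` from the sketch in Step2 to see the per-layer values).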