---
base_model:
  - huihui-ai/Llama-3.2-3B-Instruct-abliterated
  - meta-llama/Llama-3.2-3B
  - chuanli11/Llama-3.2-3B-Instruct-uncensored
  - bunnycore/Llama-3.2-3B-ProdigyPlusPlus
  - meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with huihui-ai/Llama-3.2-3B-Instruct-abliterated as the base.
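The idea behind TIES can be illustrated with a small NumPy sketch (this is an illustration of the technique, not mergekit's actual implementation): each fine-tuned model's delta from the base is trimmed to its top-`density` fraction by magnitude, a majority sign is elected per parameter, and only the deltas agreeing with that sign are averaged back into the base. The `density: 0.5` and `weight: 0.5` defaults mirror the configuration below.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5, weight=0.5):
    """Sketch of TIES-merging: trim, elect sign, disjoint merge."""
    # Task vectors: each model's delta from the base weights.
    deltas = [ft - base for ft in finetuned]

    # Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = int(np.ceil(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)

    # Elect: majority sign per parameter across all trimmed deltas.
    elected = np.sign(stacked.sum(axis=0))

    # Merge: average only the deltas that agree with the elected sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = (stacked * agree).sum(axis=0) / counts

    return base + weight * merged_delta
```

Trimming and sign election are what distinguish TIES from a plain weight average: parameters where the fine-tuned models disagree in sign do not cancel each other out, they are simply excluded.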

### Models Merged

The following models were included in the merge:

- meta-llama/Llama-3.2-3B-Instruct
- meta-llama/Llama-3.2-3B
- chuanli11/Llama-3.2-3B-Instruct-uncensored
- bunnycore/Llama-3.2-3B-ProdigyPlusPlus

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model:
  model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
layer_range:
- 0
- 28
merge_method: ties
merge_method_sequence:
- dare_ties
- slerp
- ties
parameters:
  batch_size: 32
  density: 0.5
  int8_mask: true
  layer_range:
  - 0
  - 28
  model.embed_tokens.weight.t: 1.0
  normalize: false
  t:
  - filter: self_attn
    value:
    - 0
    - 0.5
    - 0.3
    - 0.7
    - 1
  - filter: mlp
    value:
    - 1
    - 0.5
    - 0.7
    - 0.3
    - 0
  - value: 0.5
  weight: 0.5
slices:
- sources:
  - density: 0.5
    layer_range:
    - 0
    - 28
    model: meta-llama/Llama-3.2-3B-Instruct
    weight: 0.5
  - density: 0.5
    layer_range:
    - 0
    - 28
    model: meta-llama/Llama-3.2-3B
    weight: 0.5
  - density: 0.5
    layer_range:
    - 0
    - 28
    model: chuanli11/Llama-3.2-3B-Instruct-uncensored
    weight: 0.5
  - density: 0.5
    layer_range:
    - 0
    - 28
    model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
    weight: 0.5
  - density: 0.5
    layer_range:
    - 0
    - 28
    model: bunnycore/Llama-3.2-3B-ProdigyPlusPlus
    weight: 0.5
tokenizer_source: union
```
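Before running a merge like this, it can help to sanity-check the configuration programmatically. Below is a minimal sketch assuming PyYAML is installed; `CONFIG` is an abbreviated excerpt of the configuration above, and `source_models` / `check_layer_ranges` are hypothetical helpers, not part of mergekit.

```python
import yaml

# Abbreviated excerpt of the merge configuration above.
CONFIG = """\
merge_method: ties
base_model:
  model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
  density: 0.5
  weight: 0.5
slices:
- sources:
  - model: meta-llama/Llama-3.2-3B-Instruct
    layer_range: [0, 28]
  - model: meta-llama/Llama-3.2-3B
    layer_range: [0, 28]
tokenizer_source: union
"""

def source_models(cfg_text: str) -> list[str]:
    """Return every source model referenced in the slices."""
    cfg = yaml.safe_load(cfg_text)
    return [src["model"] for sl in cfg["slices"] for src in sl["sources"]]

def check_layer_ranges(cfg_text: str, n_layers: int = 28) -> bool:
    """Verify each layer_range lies within the model's layer count."""
    cfg = yaml.safe_load(cfg_text)
    return all(
        0 <= src["layer_range"][0] <= src["layer_range"][1] <= n_layers
        for sl in cfg["slices"]
        for src in sl["sources"]
    )
```

Llama-3.2-3B has 28 transformer layers, which is why `layer_range: [0, 28]` appears throughout the configuration; a check like `check_layer_ranges` catches ranges that would exceed the model's depth before the merge starts.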