---
base_model:
- bunnycore/Llama-3.2-3B-Stock
- bunnycore/Llama-3.2-3B-ProdigyPlusPlus
- bunnycore/Llama-3.2-3B-Pure-RP
- bunnycore/Llama-3.2-3B-Prodigy
- bunnycore/Llama-3.2-3B-Long-Think
- bunnycore/Llama-3.2-3B-Apex
- bunnycore/Llama-3.2-3B-Sci-Think
- huihui-ai/Llama-3.2-3B-Instruct-abliterated
- bunnycore/Llama-3.2-3B-Booval
- bunnycore/Llama-3.2-3B-Mix-Skill
library_name: transformers
tags:
- mergekit
- merge
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as a base.
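Conceptually, TIES builds a task vector (fine-tuned weights minus base weights) for each model, trims each vector to its highest-magnitude entries (controlled by `density`), elects a per-parameter sign by majority mass, and merges only the deltas that agree with the elected sign. The sketch below illustrates this for a single weight tensor; it is a simplified rendering of the method from the paper, not mergekit's exact implementation, and the function name `ties_merge_tensor` is ours.

```python
import torch

def ties_merge_tensor(base: torch.Tensor, finetuned: list[torch.Tensor],
                      density: float = 0.5, weight: float = 0.5) -> torch.Tensor:
    """Simplified TIES merge for one weight tensor (illustrative only)."""
    trimmed = []
    for ft in finetuned:
        delta = ft - base  # task vector
        # Trim: zero out all but the top-`density` fraction by magnitude
        # (assumes 0 < density < 1).
        k = int(delta.numel() * density)
        cutoff = delta.abs().flatten().kthvalue(delta.numel() - k).values
        trimmed.append(torch.where(delta.abs() > cutoff, delta,
                                   torch.zeros_like(delta)))
    stacked = torch.stack(trimmed)
    # Elect sign: positive wins where positive mass outweighs negative mass.
    elected_sign = torch.sign(stacked.sum(dim=0))
    # Disjoint merge: average only the deltas that agree with the elected sign.
    agree = torch.sign(stacked) == elected_sign
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + weight * merged_delta
```

With `density: 0.5` and `weight: 0.5` for every model, as in the configuration below, each model contributes equally and half of each task vector is kept before sign election.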
### Models Merged
The following models were included in the merge:
- [bunnycore/Llama-3.2-3B-Stock](https://huggingface.co/bunnycore/Llama-3.2-3B-Stock)
- [bunnycore/Llama-3.2-3B-ProdigyPlusPlus](https://huggingface.co/bunnycore/Llama-3.2-3B-ProdigyPlusPlus)
- [bunnycore/Llama-3.2-3B-Pure-RP](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP)
- [bunnycore/Llama-3.2-3B-Prodigy](https://huggingface.co/bunnycore/Llama-3.2-3B-Prodigy)
- [bunnycore/Llama-3.2-3B-Long-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Long-Think)
- [bunnycore/Llama-3.2-3B-Apex](https://huggingface.co/bunnycore/Llama-3.2-3B-Apex)
- [bunnycore/Llama-3.2-3B-Sci-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Sci-Think)
- [bunnycore/Llama-3.2-3B-Booval](https://huggingface.co/bunnycore/Llama-3.2-3B-Booval)
- [bunnycore/Llama-3.2-3B-Mix-Skill](https://huggingface.co/bunnycore/Llama-3.2-3B-Mix-Skill)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: bunnycore/Llama-3.2-3B-Long-Think
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Pure-RP
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Apex
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Mix-Skill
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Booval
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-ProdigyPlusPlus
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Prodigy
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Sci-Think
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Stock
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
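The merge can be reproduced by passing this configuration to mergekit's CLI entry point, e.g. `mergekit-yaml config.yml ./output-model-directory`. The result is a standard Llama-3.2 causal LM, so it loads with plain transformers; below is a minimal usage sketch, where `model_id` is a placeholder for the published repo id or the local merge output directory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the published repo id or the local mergekit output dir.
model_id = "path/to/merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's `dtype: float16`
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the TIES merge method in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```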