# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [win10/Mistral-Nemo-abliterated-Nemo-Pro-v2](https://huggingface.co/win10/Mistral-Nemo-abliterated-Nemo-Pro-v2) as the base model.
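TIES resolves interference between fine-tunes in three steps: trim each task vector (the delta from the base model) to its highest-magnitude entries, elect a per-parameter majority sign, and average only the deltas that agree with that sign. Below is a minimal, illustrative sketch of that procedure for a single weight tensor; it is not mergekit's internal code, and the function name is hypothetical. Note that with `density: 1`, as in the configuration further down, the trim step keeps every parameter.

```python
# Illustrative sketch of TIES (Yadav et al., 2023) for one weight tensor.
# Not mergekit's implementation; `ties_merge_tensor` is a hypothetical name.
import torch

def ties_merge_tensor(base: torch.Tensor, tuned: list[torch.Tensor],
                      weights: list[float], density: float = 1.0) -> torch.Tensor:
    # Task vectors: how far each fine-tune moved away from the base model.
    deltas = [w * (t - base) for t, w in zip(tuned, weights)]

    if density < 1.0:
        # Trim: zero out all but the top-`density` fraction of entries by magnitude.
        kept = []
        for d in deltas:
            k = max(1, int(density * d.numel()))
            threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
            kept.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))
        deltas = kept

    stacked = torch.stack(deltas)
    # Elect sign: the direction with the larger total mass per parameter.
    elected = stacked.sum(dim=0).sign()
    # Merge: average only the deltas whose sign agrees with the elected one.
    agree = stacked.sign() == elected
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```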
### Models Merged

The following models were included in the merge:

- [DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS](https://huggingface.co/DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS)
- [DavidAU/MN-Dark-Planet-TITAN-12B](https://huggingface.co/DavidAU/MN-Dark-Planet-TITAN-12B)
- [DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS](https://huggingface.co/DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: DavidAU/MN-Dark-Planet-TITAN-12B
    parameters:
      density: 1
      weight: 0.5
  - model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS
    parameters:
      density: 1
      weight: 1
  - model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
    parameters:
      density: 1
      weight: 1
merge_method: ties
base_model: win10/Mistral-Nemo-abliterated-Nemo-Pro-v2
dtype: float16
chat_template: "chatml"

# Regularization
regularization:
  - method: gradient_penalty
    scale: 0.05              # Increased influence for gradient control
  - method: weight_clipping
    clip_range: [-0.2, 0.2]  # Broader clipping range for flexibility
  - method: random_noise
    scale: 0.01              # Stronger noise injection
  - method: attention_dropout
    scale: 0.1               # Higher dropout to reduce attention fixation

# Postprocessing
postprocessing:
  - operation: entropy_regularization
    scale: 0.05              # Stronger encouragement for diverse outputs
  - operation: non_linear_scaling
    parameters:
      function: tanh
  - operation: sharpening
    intensity: 0.5           # Enhanced sharpening for precise outputs
  - operation: gaussian_smoothing
    sigma: 1.5               # Increased smoothing for stable outputs
  - operation: normalize
  - operation: dynamic_scaling
    scale_range: [0.8, 1.2]  # Expanded dynamic range for scaling
  - operation: smoothing
    parameters:
      adaptive: true
      range: [0.85, 1.15]    # Wider adaptive smoothing range
      kernel_size: 5
```
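The merge can be reproduced by saving the YAML above (e.g. as `config.yaml`) and either running the `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged --cuda`) or driving mergekit from Python. The sketch below follows the usage example in mergekit's README; the paths are placeholders, and it assumes mergekit accepts the configuration as written.

```python
# Sketch of running the merge from Python, adapted from mergekit's README.
# "config.yaml" and "./merged" are placeholder paths.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",                 # directory for the merged model
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use the GPU when one is available
        copy_tokenizer=True,             # carry the base tokenizer along
    ),
)
```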
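Since the configuration sets `chat_template: "chatml"`, prompts should be formatted with the ChatML template at inference time. A minimal sketch with `transformers`, assuming the merged model was written to the placeholder path `./merged`:

```python
# Minimal inference sketch; "./merged" is a placeholder for wherever the
# merged model was saved. apply_chat_template formats the prompt with the
# ChatML markers (<|im_start|>/<|im_end|>) the configuration requests.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./merged")
model = AutoModelForCausalLM.from_pretrained(
    "./merged", torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```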