Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2-Cydonia-vXXX-22B-5

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method, with unsloth/Mistral-Small-Instruct-2409 as the base.
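
For intuition, the sketch below shows the core DARE-TIES idea in plain NumPy: each fine-tuned model's task vector (its difference from the base) is randomly pruned and rescaled according to its density, a per-parameter sign is elected across the weighted task vectors, and only contributions agreeing with that sign are averaged back onto the base, scaled by lambda. This is a simplified illustration, not mergekit's actual implementation; the function names, the normalization choice, and the toy tensors are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def dare(delta, density):
    # DARE step: drop each element with probability (1 - density),
    # then rescale the survivors by 1 / density to preserve expectation.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties_merge(base, finetuned, weights, densities, lam=1.0):
    # Task vectors: how each fine-tuned model differs from the base.
    deltas = [dare(ft - base, d) for ft, d in zip(finetuned, densities)]
    weighted = [w * d for w, d in zip(weights, deltas)]
    # TIES sign election: majority sign of the weighted task vectors.
    elected = np.sign(sum(weighted))
    # Keep only contributions whose sign agrees with the elected sign.
    agree = [np.sign(d) == elected for d in weighted]
    kept = sum(np.where(a, d, 0.0) for a, d in zip(agree, weighted))
    # Normalize each parameter by the total weight of the agreeing models.
    denom = sum(w * a for w, a in zip(weights, agree))
    merged_delta = kept / np.maximum(denom, 1e-8)
    return base + lam * merged_delta

# Toy usage with two "models" fine-tuned from the same base, reusing a few
# values from the configuration below.
base = rng.normal(size=1000)
finetuned = [base + rng.normal(scale=0.1, size=1000) for _ in range(2)]
merged = dare_ties_merge(base, finetuned, weights=[1.0, 0.28],
                         densities=[0.87, 0.72], lam=1.24)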

Models Merged

The following models were included in the merge:

- anthracite-org/magnum-v4-22b
- TheDrummer/Cydonia-22B-v1.3
- TheDrummer/Cydonia-22B-v1.2
- TheDrummer/Cydonia-22B-v1.1
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- allura-org/MS-Meadowlark-22B
- spow12/ChatWaifu_v2.0_22B
- Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
- crestf411/MS-sunfall-v0.7.0
- unsloth/Mistral-Small-Instruct-2409+rAIfle/Acolyte-LORA
- InferenceIllusionist/SorcererLM-22B
- unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0         # Primary model for human-like writing
      density: 0.87       # Solid foundation with ample blending room
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.28        # Balanced for creative enhancement
      density: 0.72       # Adjusted to support Magnum without clashing
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.18        # Provides a light touch of extra creativity
      density: 0.7        # Keeps alignment with the reduced weight
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.22        # Supports accurate and specific nuances
      density: 0.7        # Ensures stability without overpowering other models
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.34        # Strengthens storytelling and roleplay depth
      density: 0.8        # Complements Magnum for storytelling without overpowering
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.32        # Balanced for creativity and fluency
      density: 0.73       # Enhances descriptive accuracy
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.27        # Adds anime-style RP nuance
      density: 0.7        # Maintains balance with other storytelling models
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.21        # Specialized for Japanese context and linguistic handling
      density: 0.6        # Retains clarity in context-heavy outputs
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.3         # Ideal for dramatic and engaging storytelling
      density: 0.76       # Supports impactful and spicy narratives
  - model: unsloth/Mistral-Small-Instruct-2409+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.26        # Provides structured input for varied content
      density: 0.72       # Aligns for balanced blending
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.25        # Refines stylistic coherence
      density: 0.76       # Enhances refined and expressive outputs
  - model: unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.28        # Injects mythical storytelling and monster lore
      density: 0.73       # Balanced density for smooth integration

merge_method: dare_ties  # Optimal for diverse and complex model blending
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85          # Overall density ensures logical and creative balance
  epsilon: 0.09          # Small step size for smooth transitions
  lambda: 1.24           # Fine-tuned scaling for sharp and coherent outputs
dtype: bfloat16
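
Assuming mergekit is installed, a configuration like the one above is typically applied with its mergekit-yaml command-line entry point (for example, mergekit-yaml config.yaml ./output-model-directory). The resulting checkpoint is a standard Mistral-architecture model, so it can be loaded with transformers as sketched below; the repository id is the one for this card, the [INST] prompt format follows Mistral-Small-Instruct-2409, and the generation settings are only illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this card; in bf16 a 22B model needs roughly
# 45 GB of memory, so a quantized variant may be more practical locally.
repo = "Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2-Cydonia-vXXX-22B-5"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

prompt = "[INST] Write a short scene set in a rain-soaked port town. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))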