
(GGUFs)

Slush is a two-stage model trained with high LoRA dropout. Stage 1 is a pretraining continuation on the base model, aimed at boosting the model's creativity and writing capabilities. The resulting LoRA is then merged into the instruction-tuned model, and stage 2 is a fine-tuning step on top of that, intended to further enhance its roleplaying capabilities and/or to repair any damage caused by the stage 1 merge.
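
As a rough illustration of the stage-1 merge step (not the exact scripts used), a LoRA trained against the base model can be applied to the Instruct checkpoint and folded in with peft; the local paths here are hypothetical:

```python
# A minimal sketch, assuming peft and transformers; "stage1-lora" and
# "stage1-model" are hypothetical local paths.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the instruction-tuned checkpoint the stage-1 LoRA gets merged into.
instruct = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-32B-Instruct", torch_dtype=torch.bfloat16
)
# Apply the adapter trained on the base model, then fold it into the weights.
merged = PeftModel.from_pretrained(instruct, "stage1-lora").merge_and_unload()
merged.save_pretrained("stage1-model")
```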

This is still at an early stage. As always, feedback is welcome, and begone if you demand perfection.

The second stage, like the Sunfall series, follows the SillyTavern preset (ChatML), so YMMV, in particular if you use some other tool and/or preset.
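
For anyone building prompts outside SillyTavern, here is a minimal sketch of the ChatML format the preset corresponds to; the actual preset may add fields such as character names:

```python
# A minimal sketch of ChatML; the SillyTavern preset may wrap turns with
# extra fields (names, system prompt formatting) beyond this.
def chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Build a ChatML prompt from a system message and (role, content) turns."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "\n".join(parts)
```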

Parameter suggestions

I did all my testing with temp 1, min-p 0.1, DRY 0.8, but enabled XTC as context grew and/or the model started saying "the same stuff".

Qwen 2.5 32B Instruct (vanilla) has a strong tendency to start speaking for the user, especially in narrator scenarios. I was unable to train this out of the model completely, so you may want to add e.g. "\nYou" as a stopping string and enable "trim incomplete sentences", especially if you have banned sentences.
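
As one way to wire these suggestions up outside SillyTavern, here is a minimal sketch using llama-cpp-python against a local GGUF; the file name is hypothetical, and DRY/XTC are omitted since not every binding exposes them:

```python
# A minimal sketch, assuming llama-cpp-python and a local GGUF quantization
# (hypothetical file name). DRY 0.8 and XTC are left to the frontend here.
from llama_cpp import Llama

llm = Llama(model_path="Q2.5-32B-Slush-Q4_K_M.gguf", n_ctx=16384)
out = llm.create_completion(
    prompt=(
        "<|im_start|>system\nYou are the narrator.<|im_end|>\n"
        "<|im_start|>user\nDescribe the scene.<|im_end|>\n"
        "<|im_start|>assistant\n"
    ),
    temperature=1.0,
    min_p=0.1,
    stop=["<|im_end|>", "\nYou"],  # "\nYou" guards against speaking for the user
    max_tokens=512,
)
print(out["choices"][0]["text"])
```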

The model has a tendency to add an unnecessary final paragraph to responses during roleplay, sort of like a "summary" of how the character is feeling. Keeping it is OK, but it may snowball quickly. I hope to address this, but I am not sure how.

Training details

  • Stage 1 (continued pretraining)
    • Target: Qwen/Qwen2.5-32B (resulting LoRA merged into Qwen/Qwen2.5-32B-Instruct)
    • LoRA dropout 0.5 (motivation)
    • LoRA rank 32, alpha 64 (motivation)
    • LR cosine 4e-6
    • LoRA+ with LR Ratio: 15
    • Context size: 8192
    • Gradient accumulation steps: 4
    • Epochs: 1
  • Stage 2 (fine tune)
    • Target: Stage 1 model
    • LoRA dropout 0.5
    • LoRA rank 32, alpha 64
    • LR cosine 5e-6 (min 5e-7)
    • LoRA+ with LR Ratio: 15
    • Context size: 16384
    • Gradient accumulation steps: 4
    • Epochs: 1
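
For orientation, here is a minimal sketch of how those hyperparameters map onto peft and transformers; dataset handling and the LoRA+ optimizer (LR ratio 15) are omitted, and the output paths are hypothetical:

```python
# A minimal sketch, assuming peft and transformers; both stages use the same
# adapter shape, only the LR schedule differs.
from peft import LoraConfig
from transformers import TrainingArguments

lora_cfg = LoraConfig(r=32, lora_alpha=64, lora_dropout=0.5, task_type="CAUSAL_LM")

# Stage 1: continued pretraining on Qwen/Qwen2.5-32B.
stage1_args = TrainingArguments(
    output_dir="stage1",  # hypothetical
    learning_rate=4e-6,
    lr_scheduler_type="cosine",
    gradient_accumulation_steps=4,
    num_train_epochs=1,
)

# Stage 2: fine-tune on the stage-1 model, cosine decaying to a 5e-7 floor.
stage2_args = TrainingArguments(
    output_dir="stage2",  # hypothetical
    learning_rate=5e-6,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr": 5e-7},
    gradient_accumulation_steps=4,
    num_train_epochs=1,
)
```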

Merge Details

Merge Method

This model was merged using the TIES merge method.

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: stage1-model
    parameters:
      weight: 1
      density: 1
  - model: stage2-model
    parameters:
      weight: 1
      density: 1
  - model: Qwen/Qwen2.5-32B-Instruct
    parameters:
      weight: 0.9
      density: 0.9
merge_method: ties
base_model: Qwen/Qwen2.5-32B
parameters:
  weight: 0.9
  density: 0.9
  normalize: true
  int8_mask: true
tokenizer_source: Qwen/Qwen2.5-32B-Instruct
dtype: bfloat16
```
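
Assuming mergekit is installed, a config like the above can be applied with its mergekit-yaml entry point; the config file name here is hypothetical:

```python
# A minimal sketch: run mergekit's CLI on the config above, assuming it was
# saved as slush-merge.yml (hypothetical name).
import subprocess

subprocess.run(
    ["mergekit-yaml", "slush-merge.yml", "./Q2.5-32B-Slush", "--cuda"],
    check=True,
)
```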