---
base_model: []
tags:
  - mergekit
  - merge
---

# Sloppy-Wingman-8x7B-hf

Sloppy Wingman

Big slop, good model. Runs better at a slightly higher temp (1.1-ish) than usual, along with 0.05 MinP and 0.28 snoot. Bog-standard ChatML works best imo, but Alpaca and Mixtral formats work (to some degree) too.
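For reference, here's a minimal sketch of those settings with `transformers`. The repo id is assumed from the card title; `min_p` needs a recent transformers release, and the "snoot" (smoothing) sampler is backend-specific (e.g. text-generation-webui/koboldcpp), so it's left out here:

```python
# Minimal sketch: ChatML prompt + the suggested samplers via transformers.
# Repo id below is assumed; "snoot"/smoothing isn't a transformers sampler,
# so only temperature and min_p are set.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "rAIfle/Sloppy-Wingman-8x7B-hf"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# Bog-standard ChatML, as recommended above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a limerick about wingmen.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.1,  # slightly higher temp than usual
    min_p=0.05,       # MinP as suggested above
    max_new_tokens=256,
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```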

Parts:

```yaml
models:
  - model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters:
      weight: 0.33
  - model: mistralai/Mixtral-8x7B-v0.1+wandb/Mixtral-8x7b-Remixtral
    parameters:
      weight: 0.33
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-v0.1
dtype: float16
```

and

```yaml
models:
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
    parameters:
      weight: 0.85
  - model: notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES
    parameters:
      weight: 0.25
  - model: ycros/BagelWorldTour-8x7B
    parameters:
      weight: 0.1
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
dtype: float16
```
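Both parts use mergekit's `task_arithmetic` method (the `base+path` syntax applies a LoRA on top of the base before merging). Conceptually, task arithmetic adds weighted "task vectors" to the base weights; here's a rough sketch of the idea over state dicts, as illustration only, not mergekit's actual implementation:

```python
# Rough sketch of task arithmetic over model state dicts (illustration only):
# merged = base + sum_i( w_i * (model_i - base) )
from typing import Dict, List, Tuple
import torch


def task_arithmetic(
    base: Dict[str, torch.Tensor],
    tuned: List[Tuple[Dict[str, torch.Tensor], float]],  # (state_dict, weight)
) -> Dict[str, torch.Tensor]:
    merged = {}
    for name, base_w in base.items():
        delta = torch.zeros_like(base_w, dtype=torch.float32)
        for sd, weight in tuned:
            # Task vector: how far the fine-tune moved away from the base.
            delta += weight * (sd[name].float() - base_w.float())
        merged[name] = (base_w.float() + delta).to(base_w.dtype)
    return merged
```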

SLERPed together as per below.


This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
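For intuition, here's a toy sketch of SLERP between two weight tensors, illustrative only; mergekit's actual implementation handles per-tensor interpolation schedules and edge cases:

```python
# Toy SLERP between two flattened weight tensors (illustration only, not
# mergekit's implementation). Interpolates along the arc between the two
# weight directions instead of a straight line.
import torch


def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    cos_theta = torch.clamp(a_dir @ b_dir, -1.0, 1.0)
    theta = torch.acos(cos_theta)
    if theta.abs() < 1e-4:
        # Nearly parallel directions: fall back to plain linear interpolation.
        out = (1 - t) * a_flat + t * b_flat
    else:
        sin_theta = torch.sin(theta)
        out = (torch.sin((1 - t) * theta) / sin_theta) * a_flat \
            + (torch.sin(t * theta) / sin_theta) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```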

### Models Merged

The following models were included in the merge:

* ./02-friend2-instruct
* ./01-friend2-base

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./01-friend2-base
  - model: ./02-friend2-instruct
merge_method: slerp
base_model: ./01-friend2-base
parameters:
  t:
    - value: 0.5
dtype: float16
```
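To reproduce a merge like this, mergekit can be driven from its CLI (`mergekit-yaml config.yml ./output`) or from Python. A sketch of the Python route follows, assuming a recent mergekit release and hypothetical file/directory names; check the mergekit README if the API has moved:

```python
# Sketch: running a mergekit config from Python (API as of recent mergekit
# releases; filenames below are hypothetical).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("slerp-config.yml", encoding="utf-8") as fp:  # hypothetical config path
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./sloppy-wingman-merged",  # hypothetical output dir
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```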