# final_merge2

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with ./storage3/input_models/Mistral-7B-v0.1_8133861 as the base model.
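DARE TIES combines two ideas. DARE sparsifies each model's task vector (its parameter delta from the base) by randomly dropping entries and rescaling the survivors by 1/density, so the expected delta is preserved; TIES then elects a per-parameter sign across models and discards components that disagree before summing. The NumPy sketch below is a toy illustration of that pipeline on flat parameter vectors; it is not mergekit's implementation, and the function name and normalization details are simplifications.

```python
import numpy as np

def dare_ties_merge(base, tuned, densities, weights, seed=0):
    """Toy DARE TIES merge over flat parameter vectors (illustration only)."""
    rng = np.random.default_rng(seed)
    deltas = []
    for ft, density, weight in zip(tuned, densities, weights):
        delta = ft - base                         # task vector vs. the base model
        keep = rng.random(delta.shape) < density  # DARE: random drop mask
        # Rescale survivors by 1/density so the expected delta is unchanged.
        deltas.append(weight * np.where(keep, delta, 0.0) / density)

    deltas = np.stack(deltas)                     # (num_models, num_params)
    sign = np.sign(deltas.sum(axis=0))            # TIES: elect a sign per parameter
    agree = np.sign(deltas) == sign               # keep only agreeing components
    merged = np.where(agree, deltas, 0.0).sum(axis=0)

    # Rough analogue of `normalize: 1.0`: divide by the weight mass that
    # actually contributed to each parameter.
    mass = np.where(agree, np.abs(np.asarray(weights))[:, None], 0.0).sum(axis=0)
    return base + merged / np.maximum(mass, 1e-12)
```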

### Models Merged

The following models were included in the merge:

* ./storage3/input_models/WizardMath-7B-V1.1_2027605156
* ./storage3/input_models/shisa-gamma-7b-v1_4025154171
* ./storage3/input_models/Abel-7B-002_121690448

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ./storage3/input_models/Mistral-7B-v0.1_8133861
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model: ./storage3/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 1.0
      weight: -0.03124267020839505
  - layer_range: [0, 32]
    model: ./storage3/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.6341988463097039
      weight: 1.4620193962366177
  - layer_range: [0, 32]
    model: ./storage3/input_models/Abel-7B-002_121690448
    parameters:
      density: 1.0
      weight: 1.6021922860031972
  - layer_range: [0, 32]
    model: ./storage3/input_models/Mistral-7B-v0.1_8133861
```
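To reproduce a merge like this, save the configuration above to a file and pass it to mergekit, e.g. `mergekit-yaml config.yml ./final_merge2` on the command line. The snippet below does the same through mergekit's Python API; the paths are placeholders, and the options shown are assumptions based on mergekit's documented `run_merge` interface rather than the settings used for this model.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (path is a placeholder).
with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Execute the merge and write the merged checkpoint to ./final_merge2.
run_merge(
    merge_config,
    out_path="./final_merge2",
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```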
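The result is a standard Mistral-architecture causal LM (7.24B parameters, stored as bfloat16 safetensors), so it loads with plain transformers. A minimal sketch, assuming the merged weights are available locally at ./final_merge2 (substitute a hub repo id if the model is published):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./final_merge2"  # placeholder path or hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` in the merge config
    device_map="auto",
)

prompt = "What is 17 * 23?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```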