# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, using [picAIso/MIX1](https://huggingface.co/picAIso/MIX1) as the base model.
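
SLERP (spherical linear interpolation) treats each pair of corresponding weight tensors as points on a hypersphere and interpolates along the arc between them rather than along a straight line, which preserves the magnitude characteristics of the weights better than plain averaging. The following PyTorch snippet is a minimal, illustrative sketch of that interpolation, not mergekit's actual implementation, which handles additional details (per-tensor bookkeeping, dtypes, and edge cases) differently:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors (illustrative sketch)."""
    v0_flat = v0.flatten().float()
    v1_flat = v1.flatten().float()

    # Angle between the two parameter vectors on the unit hypersphere.
    v0_unit = v0_flat / (v0_flat.norm() + eps)
    v1_unit = v1_flat / (v1_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(v0_unit, v1_unit), -1.0, 1.0))

    if omega.abs() < eps:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        merged = (1.0 - t) * v0_flat + t * v1_flat
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * v0_flat + (
            torch.sin(t * omega) / sin_omega
        ) * v1_flat

    return merged.reshape(v0.shape).to(v0.dtype)
```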

### Models Merged

The following models were included in the merge:

* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: picAIso/MIX1
        layer_range: [0, 32]
      - model: NousResearch/Hermes-2-Pro-Llama-3-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: picAIso/MIX1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
random_seed: 0
```
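
Here `t` is the interpolation factor: `0` keeps the base model's weights (picAIso/MIX1) and `1` takes the weights from NousResearch/Hermes-2-Pro-Llama-3-8B. The lists under the `self_attn` and `mlp` filters are gradients that mergekit spreads across the 32 layers, so attention weights lean toward Hermes-2-Pro in the later layers while MLP weights lean toward it in the earlier layers; all remaining tensors use a flat `0.5`. The snippet below shows one plausible way such a gradient expands into per-layer values, assuming the anchor points are spaced evenly over the layer range and linearly interpolated (mergekit's exact scheme may differ):

```python
import numpy as np

num_layers = 32
anchors = [0.0, 0.5, 0.3, 0.7, 1.0]  # the self_attn gradient from the config above

# Assumed behavior: anchor values are spaced evenly over the layer range
# and linearly interpolated to yield one t per layer.
anchor_positions = np.linspace(0, num_layers - 1, num=len(anchors))
per_layer_t = np.interp(np.arange(num_layers), anchor_positions, anchors)
print(per_layer_t.round(3))
```

To reproduce the merge, the configuration can be saved as `config.yaml` and run through mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory`.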
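
The merged weights are stored as bfloat16 safetensors, so the model can be loaded locally with `transformers`. A minimal usage sketch (the prompt and generation settings are purely illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "picAIso/TARS-8B-llama-REMIX"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype the merge was produced in
    device_map="auto",
)

prompt = "Explain spherical linear interpolation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```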