Jotun_flam

This model performs all right, but it suffers from one minor error and one larger error that I may get around to correcting eventually. Be aware that I inadvertently skipped layer 20 of the original SLERPed model (minor), and that I merged two bf16 models at fp16 (major), getting the worst of both worlds. This MAY be inconsequential at Q4 and lower quants, but I am unsure.
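As a rough illustration of why the fp16 handling of bf16 weights is lossy (this PyTorch snippet is illustrative only and is not part of the merge): bf16 keeps fp32's exponent range at reduced precision, while fp16 has finer precision but a much narrower range, so casting bf16 tensors to fp16 can overflow large values while recovering none of the precision bf16 had already discarded.

```python
import torch

# bf16: fp32's exponent range, ~8 bits of precision.
# fp16: ~11 bits of precision, but values above ~65504 overflow.
w_bf16 = torch.tensor([3.0e5, 1.0 + 2**-10], dtype=torch.bfloat16)
print(w_bf16)                    # roughly [299008., 1.0]; the 1.0009765625
                                 # was already rounded away by bf16

# Casting to fp16 overflows the large value to inf, while the extra
# mantissa precision fp16 could offer was already lost upstream.
print(w_bf16.to(torch.float16))  # roughly [inf, 1.0]
```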

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the passthrough merge method.

Models Merged

The following models were included in the merge:

  • schonsense/flam-kit
  • schonsense/Llama-3.3-70B-Inst-Ablit-Flammades-SLERP

Configuration

The following YAML configuration was used to produce this model:

slices:
  - sources:
    - model: schonsense/Llama-3.3-70B-Inst-Ablit-Flammades-SLERP
      layer_range: [0, 19]
  - sources:
    - model: schonsense/flam-kit
      layer_range: [20, 39]
  - sources:
    - model: schonsense/Llama-3.3-70B-Inst-Ablit-Flammades-SLERP
      layer_range: [21, 40]
  - sources:
    - model: schonsense/flam-kit
      layer_range: [40, 59]
  - sources:
    - model: schonsense/Llama-3.3-70B-Inst-Ablit-Flammades-SLERP
      layer_range: [41, 80]

 
merge_method: passthrough
dtype: float16
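
To reproduce (or fix) the merge, the config above can be passed to the mergekit-yaml command-line tool or driven from Python. Below is a minimal sketch using mergekit's documented Python entry points; the file name jotun_flam.yml and the output path are placeholders, and import paths or option names may differ between mergekit versions.

```python
# Minimal sketch, assuming mergekit's Python API (MergeConfiguration,
# MergeOptions, run_merge); "jotun_flam.yml" is a hypothetical file
# containing the YAML configuration shown above.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("jotun_flam.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Jotun_flam",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # copy slices on GPU if available
        copy_tokenizer=True,             # carry the tokenizer into the output
    ),
)
```

Changing dtype: float16 to dtype: bfloat16 in the config would address the precision issue noted at the top of this card.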