# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SCE merge method, with meta-llama/Llama-3.3-70B-Instruct as the base model.
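
SCE is a task-vector merge: roughly, it computes each source model's delta from the base, keeps only the highest-variance fraction of elements (controlled by the `select_topk` parameter in the configuration below), weights each model's surviving delta by its magnitude, resolves sign conflicts, and adds the fused delta back onto the base. The sketch below illustrates that idea for a single weight tensor; the function name and the exact selection and weighting details are simplifications for illustration, not mergekit's actual implementation.

```python
import torch

def sce_merge_tensor(base: torch.Tensor,
                     models: list[torch.Tensor],
                     select_topk: float = 0.80) -> torch.Tensor:
    """Illustrative, simplified SCE-style merge of one weight tensor."""
    # Task vectors: each model's delta from the shared base weights.
    deltas = torch.stack([m - base for m in models])

    # Select: keep only the top-k fraction of positions by variance across models.
    variance = deltas.var(dim=0)
    k = max(1, int(select_topk * variance.numel()))
    threshold = variance.flatten().topk(k).values.min()
    deltas = deltas * (variance >= threshold).to(deltas.dtype)

    # Calculate: per-model coefficients from the magnitude of the surviving deltas.
    weights = deltas.pow(2).sum(dim=tuple(range(1, deltas.dim())))
    weights = weights / weights.sum().clamp_min(1e-12)

    # Erase: drop elements whose sign disagrees with the elected sign, then fuse.
    elected_sign = torch.sign(deltas.sum(dim=0))
    sign_mask = (torch.sign(deltas) == elected_sign).to(deltas.dtype)
    fused = (weights.view(-1, *([1] * (deltas.dim() - 1))) * deltas * sign_mask).sum(dim=0)
    return base + fused
```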

### Models Merged

The following models were included in the merge:

* Sao10K/L3-70B-Euryale-v2.1
* juvi21/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B
* SicariusSicariiStuff/Negative_LLAMA_70B
* nbeerbower/Llama-3.1-Nemotron-lorablated-70B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/L3-70B-Euryale-v2.1
  - model: juvi21/Hermes-2-Theta-L3-Euryale-Ties-0.8-70B
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
merge_method: sce
base_model: meta-llama/Llama-3.3-70B-Instruct
parameters:
  select_topk: 0.80
dtype: bfloat16
tokenizer:
  source: union
```
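
The merge itself is produced by pointing mergekit (for example its `mergekit-yaml` command) at a file containing the configuration above. Once the merged weights are published, they load like any other Llama-style checkpoint. The sketch below assumes the repository id from this card and the `bfloat16` dtype declared in the configuration; adjust both if your setup differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this card; change it if the merge is hosted elsewhere.
model_id = "TareksLab/Experimental-Base-Instruct-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # requires accelerate; shards the 70B weights across devices
)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```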