I accidentally doubled up DeepSeek R1 Distill in the last one... oops.

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SCE merge method, with meta-llama/Llama-3.3-70B-Instruct as the base.

Models Merged

The following models were included in the merge:

NaniDAO/Llama-3.3-70B-Instruct-ablated
nbeerbower/Llama-3.1-Nemotron-lorablated-70B
SicariusSicariiStuff/Negative_LLAMA_70B
huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated

Configuration

The following YAML configuration was used to produce this model:

models:
  # Pivot model
  - model: meta-llama/Llama-3.3-70B-Instruct
  # Target models
  - model: NaniDAO/Llama-3.3-70B-Instruct-ablated
  - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
merge_method: sce
base_model: meta-llama/Llama-3.3-70B-Instruct
parameters:
  select_topk: 0.15
out_dtype: bfloat16
tokenizer:
  source: SicariusSicariiStuff/Negative_LLAMA_70B
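
To reproduce the merge, the configuration above can be passed to mergekit. Below is a minimal Python sketch, assuming the YAML is saved as merge_config.yaml, mergekit is installed (pip install mergekit), and the MergeOptions flags are adjusted to your hardware; treat it as illustrative rather than the exact command used for this model.

# Minimal sketch: run the SCE merge defined in the YAML above via mergekit's Python API.
# Assumes the configuration has been saved locally as merge_config.yaml.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

OUTPUT_PATH = "./Experimental-Base-V2.1-R1-LLaMa-70B"  # hypothetical local output directory

with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=True,            # use a GPU for the merge if available
        copy_tokenizer=True,  # copy the tokenizer named in the config
        lazy_unpickle=True,   # reduce memory use while loading shards
        low_cpu_memory=True,  # keep weights on the accelerator where possible
    ),
)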
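Once merged (or when pulling the published weights), the model loads like any other Llama-style checkpoint with transformers. A minimal sketch, assuming the weights are available under the repository id TareksGraveyard/Experimental-Base-V2.1-R1-LLaMa-70B, accelerate is installed for device_map="auto", and you have enough GPU memory for a 70B model in bfloat16:

# Minimal sketch: load the merged model with transformers in bfloat16.
# The repository id is taken from this card; swap in a local path if you ran the merge yourself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TareksGraveyard/Experimental-Base-V2.1-R1-LLaMa-70B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the out_dtype in the merge config
    device_map="auto",           # spread the 70B weights across available GPUs
)

prompt = "Explain what a model merge is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))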
