I accidentally doubled up DeepSeek R1 Distill in the last one... oops.
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SCE merge method, with meta-llama/Llama-3.3-70B-Instruct as the base.
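SCE (Select, Calculate, Erase) keeps only the highest-variance parameter deltas across the target models, weights each model by the magnitude of what survives, and drops elements that disagree with the majority sign before adding the result back onto the base. The sketch below is a rough numpy illustration of that idea as I understand it; the function name `sce_merge` and the exact coefficient formula are assumptions for illustration, not mergekit's actual implementation.

```python
import numpy as np

def sce_merge(base, task_weights, select_topk=0.15):
    """Illustrative SCE sketch for a single weight tensor (not mergekit's code).

    base:         the base model's parameter tensor
    task_weights: the same tensor from each target model
    select_topk:  fraction of highest-variance elements to keep
    """
    # Task vectors: each model's delta from the base.
    deltas = np.stack([w - base for w in task_weights])  # (n_models, ...)

    # Select: keep only the top-k fraction of elements by cross-model variance.
    variance = deltas.var(axis=0)
    k = max(1, int(select_topk * variance.size))
    threshold = np.sort(variance, axis=None)[-k]
    deltas = np.where(variance >= threshold, deltas, 0.0)

    # Calculate: per-model fusion coefficients from the energy of each delta.
    energy = (deltas ** 2).reshape(len(task_weights), -1).sum(axis=1)
    coeffs = energy / (energy.sum() + 1e-12)

    # Erase: zero elements whose sign disagrees with the majority sign.
    majority_sign = np.sign(deltas.sum(axis=0))
    deltas = np.where(np.sign(deltas) == majority_sign, deltas, 0.0)

    # Merge: coefficient-weighted sum of the filtered deltas, back onto base.
    return base + np.tensordot(coeffs, deltas, axes=1)
```

With `select_topk: 0.15` as in the configuration below, only the top 15% highest-variance elements of each tensor participate in the merge.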
### Models Merged
The following models were included in the merge:
- huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
- NaniDAO/Llama-3.3-70B-Instruct-ablated
- SicariusSicariiStuff/Negative_LLAMA_70B
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  # Pivot model
  - model: meta-llama/Llama-3.3-70B-Instruct
  # Target models
  - model: NaniDAO/Llama-3.3-70B-Instruct-ablated
  - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
merge_method: sce
base_model: meta-llama/Llama-3.3-70B-Instruct
parameters:
  select_topk: 0.15
out_dtype: bfloat16
tokenizer:
  source: SicariusSicariiStuff/Negative_LLAMA_70B
```
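To reproduce the merge locally, this config can be fed to mergekit either via its CLI (`mergekit-yaml config.yaml ./merged --cuda`) or its Python API. Below is a minimal sketch, assuming mergekit is installed (`pip install mergekit`), the YAML above is saved as `config.yaml`, and you have access to the gated Llama repos; the paths and option values are placeholders.

```python
# Merge-reproduction sketch; paths and options are illustrative assumptions.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",  # output directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,   # respects the tokenizer source in the YAML
        lazy_unpickle=True,    # lower peak RAM while reading shards
        low_cpu_memory=True,
    ),
)
```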