---
base_model:
- arcee-ai/Llama-3.1-SuperNova-Lite
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.1
pipeline_tag: text-generation
---
# Llama-3.1-SuperNova-Lite-lorabliterated-8B
An experiment in model safety:

This repo contains a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

Application of the abliteration LoRA derived from Llama 3 was partially successful in reducing refusals, with increased compliance in the qualified contexts of hypotheticals and roleplay. Baseline safety appears intact. We hypothesize that the distillation process transferred additional safety behavior, encoded in a way that differs from the refusal behavior originally targeted by the abliteration in [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) against Llama 3. The partial effectiveness is evidence of a common model ancestry between Llama 3 and Llama 3.1, though we are not privy to the specific details.
Built with Llama.
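
For readers who prefer working outside of mergekit, the same LoRA application can be sketched with `peft`. This is only a minimal sketch, not the exact process used to build the released weights (those come from the mergekit configuration below), and the output directory name is arbitrary.

```python
# Minimal sketch: apply the abliteration LoRA to the SuperNova-Lite base with peft,
# then fold the adapter into the weights and save the result.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "arcee-ai/Llama-3.1-SuperNova-Lite"
lora_id = "grimjim/Llama-3-Instruct-abliteration-LoRA-8B"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, lora_id)  # attach the LoRA adapter
model = model.merge_and_unload()                  # merge adapter weights into the base

tokenizer = AutoTokenizer.from_pretrained(base_id)
model.save_pretrained("Llama-3.1-SuperNova-Lite-lorabliterated-8B")      # arbitrary output directory
tokenizer.save_pretrained("Llama-3.1-SuperNova-Lite-lorabliterated-8B")
```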
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, with [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B) as the base.
An illustrative mergekit CLI invocation:
```sh
mergekit-yaml mergekit_config.yml model_directory/llama-model-8B --cuda --lora-merge-cache lora_merge_cache
```
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
dtype: bfloat16
merge_method: passthrough
models:
- model: arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
```
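
A minimal text-generation sketch with `transformers`, assuming the merged weights sit at the output path from the CLI example above (substitute the Hub repo id to load remotely):

```python
# Minimal sketch: load the merged model and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "model_directory/llama-model-8B"  # output path from the mergekit-yaml example above

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "In two sentences, what does an abliteration LoRA do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```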