---
base_model:
- ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
- Azazelle/Aura_Llama3
- Azazelle/llama3-8b-hikikomori-v0.4
- Azazelle/RP_Format_QuoteAsterisk_Llama3
- Azazelle/Theory_of_Mind_Llama3
- Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B
- Azazelle/ANJIR-ADAPTER-128
- Azazelle/Llama3_RP_ORPO_LoRA
- Azazelle/Smarts_Llama3
- Azazelle/BlueMoon_Llama3
- Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B
- Azazelle/Nimue-8B
- Azazelle/Luna_Llama3
- Azazelle/Llama-3-LongStory-LORA
- Azazelle/Llama-3-8B-Abomination-LORA
library_name: transformers
tags:
- mergekit
- merge
---
# Llama-3-Nerdy-RP-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
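
The merged weights load like any Llama 3 8B Instruct checkpoint. Below is a minimal inference sketch with the Transformers library; the repository id `Azazelle/Llama-3-Nerdy-RP-8B` is an assumption and should be adjusted if the weights live elsewhere.

```python
# A minimal loading sketch, assuming the repo id "Azazelle/Llama-3-Nerdy-RP-8B".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/Llama-3-Nerdy-RP-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge itself was done in bfloat16 (see config below)
    device_map="auto",
)

# The merge targets Llama-3-8B-Instruct, so its chat template should apply.
messages = [{"role": "user", "content": "Introduce yourself in character."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```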
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with output/stop_it_nerd as the base.
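
Model Stock derives per-layer interpolation weights from the geometry of the fine-tuned checkpoints relative to the base, rather than from hand-tuned ratios. A merge like this one can be re-run from the YAML in the Configuration section through mergekit's Python entry point. The sketch below assumes the config is saved as `config.yaml` and that a local output/stop_it_nerd checkpoint exists; that intermediate base was never published, so this is illustrative only.

```python
# A sketch of running the merge programmatically with mergekit;
# "config.yaml" and the local output/stop_it_nerd checkpoint are assumptions.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=True,            # merge on GPU if available
        copy_tokenizer=True,  # carry the base tokenizer into the output
        lazy_unpickle=True,   # stream tensors to keep peak memory low
    ),
)
```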
### Models Merged
The following models were included in the merge; each entry applies one LoRA adapter on top of output/stop_it_nerd (see the sketch after this list):
* output/stop_it_nerd + [ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA](https://huggingface.co/ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA)
* output/stop_it_nerd + [Azazelle/Aura_Llama3](https://huggingface.co/Azazelle/Aura_Llama3)
* output/stop_it_nerd + [Azazelle/llama3-8b-hikikomori-v0.4](https://huggingface.co/Azazelle/llama3-8b-hikikomori-v0.4)
* output/stop_it_nerd + [Azazelle/RP_Format_QuoteAsterisk_Llama3](https://huggingface.co/Azazelle/RP_Format_QuoteAsterisk_Llama3)
* output/stop_it_nerd + [Azazelle/Theory_of_Mind_Llama3](https://huggingface.co/Azazelle/Theory_of_Mind_Llama3)
* output/stop_it_nerd + [Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B](https://huggingface.co/Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B)
* output/stop_it_nerd + [Azazelle/ANJIR-ADAPTER-128](https://huggingface.co/Azazelle/ANJIR-ADAPTER-128)
* output/stop_it_nerd + [Azazelle/Llama3_RP_ORPO_LoRA](https://huggingface.co/Azazelle/Llama3_RP_ORPO_LoRA)
* output/stop_it_nerd + [Azazelle/Smarts_Llama3](https://huggingface.co/Azazelle/Smarts_Llama3)
* output/stop_it_nerd + [Azazelle/BlueMoon_Llama3](https://huggingface.co/Azazelle/BlueMoon_Llama3)
* output/stop_it_nerd + [Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B](https://huggingface.co/Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B)
* output/stop_it_nerd + [Azazelle/Nimue-8B](https://huggingface.co/Azazelle/Nimue-8B)
* output/stop_it_nerd + [Azazelle/Luna_Llama3](https://huggingface.co/Azazelle/Luna_Llama3)
* output/stop_it_nerd + [Azazelle/Llama-3-LongStory-LORA](https://huggingface.co/Azazelle/Llama-3-LongStory-LORA)
* output/stop_it_nerd + [Azazelle/Llama-3-8B-Abomination-LORA](https://huggingface.co/Azazelle/Llama-3-8B-Abomination-LORA)
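
In mergekit, the `X + Y` syntax means the LoRA adapter Y is first merged into the base checkpoint X, and the resulting full model then enters the stock merge. A rough peft equivalent for a single entry, as a sketch (the adapter shown is just one of the fifteen listed above):

```python
# Roughly what mergekit's "base+adapter" syntax does for one entry:
# fold one LoRA into the base before the model_stock combine.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("output/stop_it_nerd")  # unpublished intermediate
patched = PeftModel.from_pretrained(base, "Azazelle/Aura_Llama3")
merged = patched.merge_and_unload()  # bake the LoRA deltas into the base weights
merged.save_pretrained("stop_it_nerd_plus_aura")
```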
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: output/stop_it_nerd
dtype: bfloat16
merge_method: model_stock
slices:
- sources:
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-8B-Abomination-LORA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B
  - layer_range: [0, 32]
    model: output/stop_it_nerd+ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-LongStory-LORA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/ANJIR-ADAPTER-128
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama3_RP_ORPO_LoRA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/RP_Format_QuoteAsterisk_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Theory_of_Mind_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Aura_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Luna_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/BlueMoon_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Smarts_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/llama3-8b-hikikomori-v0.4
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Nimue-8B
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B
  - layer_range: [0, 32]
    model: output/stop_it_nerd
```