---
base_model:
- NousResearch/Hermes-3-Llama-3.1-8B
- kloodia/lora-8b-physic
- Undi95/Meta-Llama-3.1-8B-Claude
- hikikomoriHaven/llama3-8b-hikikomori-v0.4
- arcee-ai/Llama-3.1-SuperNova-Lite
- Blackroot/Llama3-RP-Lora
- aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
- kloodia/lora-8b-medic
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- kloodia/lora-8b-bio
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- kloodia/lora-8b-math
- cgato/L3-TheSpice-8b-v0.8.3
- Blackroot/Llama-3-8B-Abomination-LORA
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Undi95/Meta-Llama-3.1-8B-Claude](https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude) + [hikikomoriHaven/llama3-8b-hikikomori-v0.4](https://huggingface.co/hikikomoriHaven/llama3-8b-hikikomori-v0.4) as the base.

### Models Merged

The following models were included in the merge:

* [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) + [kloodia/lora-8b-physic](https://huggingface.co/kloodia/lora-8b-physic)
* [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) + [Blackroot/Llama3-RP-Lora](https://huggingface.co/Blackroot/Llama3-RP-Lora)
* [aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored](https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic)
* [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) + [kloodia/lora-8b-bio](https://huggingface.co/kloodia/lora-8b-bio)
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) + [kloodia/lora-8b-math](https://huggingface.co/kloodia/lora-8b-math)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2+kloodia/lora-8b-bio
  - model: arcee-ai/Llama-3.1-SuperNova-Lite+Blackroot/Llama3-RP-Lora
  - model: NousResearch/Hermes-3-Llama-3.1-8B+kloodia/lora-8b-physic
  - model: cgato/L3-TheSpice-8b-v0.8.3+Blackroot/Llama-3-8B-Abomination-LORA
  - model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored+kloodia/lora-8b-medic
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1+kloodia/lora-8b-math
merge_method: model_stock
base_model: Undi95/Meta-Llama-3.1-8B-Claude+hikikomoriHaven/llama3-8b-hikikomori-v0.4
normalize: false
int8_mask: true
dtype: bfloat16
```
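Saving the YAML above to a file (e.g. `config.yml`) and running mergekit's `mergekit-yaml` entry point (`mergekit-yaml config.yml ./merged`) should rebuild the merge. Below is a minimal sketch of loading the result with Transformers; `path/to/merged-model` is a hypothetical placeholder for the mergekit output directory or the published repo id, and `bfloat16` mirrors the `dtype` in the config:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical placeholder: point this at the mergekit output
# directory (or the published Hugging Face repo id for this merge).
model_path = "path/to/merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge config's dtype
    device_map="auto",           # requires the accelerate package
)

prompt = "Briefly explain what a model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```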