---
base_model:
- saltlux/Ko-Llama3-Luxia-8B
- beomi/Llama-3-KoEn-8B-preview
- NousResearch/Meta-Llama-3-8B
- dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
- openlynn/Llama-3-Soliloquy-8B-v2
- lodrick-the-lafted/Olethros-8B
- dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2
- NousResearch/Meta-Llama-3-8B-Instruct
- beomi/Llama-3-KoEn-8B-Instruct-preview
library_name: transformers
tags:
- mergekit
- merge
---
# YachtRP-Llama-3-KoEn-8B
🚨 Yacht Korean/English RP merge test model. Please note that this is an English/Korean RP test version, so it may not operate properly. Its answers may contain inappropriate content, so use it carefully and for testing purposes only.

The model_stock merge method performed poorly in my human RP tests, so DARE TIES is used for both the Korean and English components.

All licenses belong to the models listed below, so please use this model for personal and academic purposes only. 🚨
## Merge Details

### Merge Method
This model was merged using the DARE TIES merge method, with NousResearch/Meta-Llama-3-8B as the base model.
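In DARE TIES, each donor model contributes a task vector (its weights minus the base model's): DARE randomly drops a fraction of each task vector and rescales the survivors, and TIES elects a per-parameter sign to resolve conflicts before summing. The snippet below is a toy per-tensor sketch of that idea, assuming per-model `weight` and `density` values like those in the configuration further down; it is illustrative only, not mergekit's actual implementation.

```python
import torch

def dare_ties_tensor(base, finetuned, weights, densities):
    """Toy per-tensor DARE TIES merge (illustrative, not mergekit's code).

    base:      tensor from the base model
    finetuned: matching tensors from the donor models
    weights:   per-donor merge weights (the `weight` values in the config)
    densities: fraction of each task vector kept by DARE's random drop
    """
    deltas = []
    for ft, density in zip(finetuned, densities):
        delta = ft - base                      # task vector
        mask = torch.rand_like(delta) < density
        deltas.append(delta * mask / density)  # DARE: drop and rescale

    stacked = torch.stack([w * d for w, d in zip(weights, deltas)])

    # TIES: elect a sign per parameter, keep only agreeing contributions
    elected = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected
    return base + (stacked * agree).sum(dim=0)
```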
### Models Merged
The following models were included in the merge:
- saltlux/Ko-Llama3-Luxia-8B
- beomi/Llama-3-KoEn-8B-preview
- dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
- openlynn/Llama-3-Soliloquy-8B-v2
- lodrick-the-lafted/Olethros-8B
- dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2
- NousResearch/Meta-Llama-3-8B-Instruct
- beomi/Llama-3-KoEn-8B-Instruct-preview
### Configuration
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60
      weight: 0.25
  - model: beomi/Llama-3-KoEn-8B-preview
    parameters:
      density: 0.55
      weight: 0.2
  - model: saltlux/Ko-Llama3-Luxia-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: beomi/Llama-3-KoEn-8B-Instruct-preview
    parameters:
      density: 0.55
      weight: 0.15
  - model: dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2
    parameters:
      density: 0.55
      weight: 0.1
  - model: dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
    parameters:
      density: 0.55
      weight: 0.1
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      density: 0.55
      weight: 0.1
  - model: lodrick-the-lafted/Olethros-8B
    parameters:
      density: 0.55
      weight: 0.1
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
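To reproduce the merge, a configuration like this can be saved to a file and passed to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./YachtRP-Llama-3-KoEn-8B` (the output directory name here is illustrative).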
## Test
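A minimal smoke-test sketch using 🤗 Transformers is shown below. The checkpoint path is a placeholder for wherever this merge is hosted, and the prompt and sampling settings are arbitrary examples, not recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/YachtRP-Llama-3-KoEn-8B"  # placeholder, replace with the actual checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

# Korean RP-style prompt: "You are a yacht captain. Tell me a sailing story."
messages = [{"role": "user", "content": "당신은 요트 선장입니다. 항해 이야기를 들려주세요."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```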
## Citation instructions
### Ko-Llama3-Luxia-8B
```
@article{kollama3luxiamodelcard,
  title={Ko Llama 3 Luxia Model Card},
  author={AILabs@Saltlux},
  year={2024},
  url={https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B/blob/main/README.md}
}
```
### Original Llama-3
```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
### Llama-3-KoEn
```
@article{llama3koen,
  title={Llama-3-KoEn},
  author={L, Junbum},
  year={2024},
  url={https://huggingface.co/beomi/Llama-3-KoEn-8B}
}
```