|
--- |
|
base_model: |
|
- saltlux/Ko-Llama3-Luxia-8B |
|
- beomi/Llama-3-KoEn-8B-preview |
|
- NousResearch/Meta-Llama-3-8B |
|
- dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5 |
|
- openlynn/Llama-3-Soliloquy-8B-v2 |
|
- lodrick-the-lafted/Olethros-8B |
|
- dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2 |
|
- NousResearch/Meta-Llama-3-8B-Instruct |
|
- beomi/Llama-3-KoEn-8B-Instruct-preview |
|
library_name: transformers |
|
tags: |
|
- mergekit |
|
- merge |
|
|
|
--- |
|
# YachtRP-Llama-3-KoEn-8B |
|
<a href="https://ibb.co/jD17fJ9"><img src="https://i.ibb.co/6Ff6wXc/Screenshot-2024-05-08-at-5-07-53-PM.png" alt="Screenshot-2024-05-08-at-5-07-53-PM" border="0"></a> |
|
|
|
🚨 Yacht Korean/English RP merge test model. This is an experimental English/Korean roleplay build and may not behave reliably. Outputs may contain inappropriate content, so use it carefully and for testing purposes only.
|
|
|
The model_stock merge method performed poorly in my manual RP tests, so DARE-TIES (`dare_ties`) was used for both the Korean and English components.
|
|
|
All licenses belong to the upstream models listed below, so please restrict use to personal and academic purposes only. 🚨
|
|
|
## Merge Details |
|
### Merge Method |
|
|
|
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base.
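
For reference, a DARE-TIES merge like this one is normally produced with [mergekit](https://github.com/arcee-ai/mergekit). The snippet below is a minimal sketch, assuming mergekit is installed and the YAML from the Configuration section further down is saved as `config.yaml`; the output directory name is illustrative.

```python
# Minimal sketch: invoking mergekit's CLI from Python to reproduce the merge.
# Assumptions: mergekit is installed (pip install mergekit) and the YAML shown
# in the Configuration section is saved as config.yaml; the output directory
# name is illustrative.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./YachtRP-Llama-3-KoEn-8B"],
    check=True,  # raise if the merge fails
)
```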
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge: |
|
* [saltlux/Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B) |
|
* [beomi/Llama-3-KoEn-8B-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) |
|
* [dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5) |
|
* [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2) |
|
* [lodrick-the-lafted/Olethros-8B](https://huggingface.co/lodrick-the-lafted/Olethros-8B) |
|
* [dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2) |
|
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) |
|
* [beomi/Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview) |
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60
      weight: 0.25
  - model: beomi/Llama-3-KoEn-8B-preview
    parameters:
      density: 0.55
      weight: 0.2
  - model: saltlux/Ko-Llama3-Luxia-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: beomi/Llama-3-KoEn-8B-Instruct-preview
    parameters:
      density: 0.55
      weight: 0.15
  - model: dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2
    parameters:
      density: 0.55
      weight: 0.1
  - model: dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
    parameters:
      density: 0.55
      weight: 0.1
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      density: 0.55
      weight: 0.1
  - model: lodrick-the-lafted/Olethros-8B
    parameters:
      density: 0.55
      weight: 0.1
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
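
For a quick local check, the merged weights can be loaded with `transformers`. The snippet below is a minimal sketch: the local path is an assumption (point it at the mergekit output directory or at the Hugging Face repo id where the model is hosted), and the Korean prompt is only an illustrative RP opener.

```python
# Minimal sketch: loading the merged checkpoint for a short Korean RP completion.
# The model path is an assumption; replace it with the actual repo id or the
# local directory produced by mergekit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./YachtRP-Llama-3-KoEn-8B"  # assumed local merge output

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Illustrative roleplay opener (Korean): "You are an adventurer in a fantasy
# world. A strange sound comes from deep within a dark forest."
prompt = "당신은 판타지 세계의 모험가입니다. 어두운 숲 속에서 낯선 소리가 들려옵니다.\n모험가:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```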
|
|
|
### Test |
|
<a href="https://ibb.co/whh7Stk"><img src="https://i.ibb.co/k22J4Z7/Screenshot-2024-05-08-at-4-27-33-PM.png" alt="Screenshot-2024-05-08-at-4-27-33-PM" border="0"></a> |
|
|
|
### Citation instructions |
|
**Ko-Llama3-Luxia-8B** |
|
```
@article{kollama3luxiamodelcard,
  title={Ko Llama 3 Luxia Model Card},
  author={AILabs@Saltlux},
  year={2024},
  url={https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B/blob/main/README.md}
}
```
|
|
|
**Original Llama-3** |
|
```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
|
|
|
**Llama-3-KoEn**
|
```
@article{llama3koen,
  title={Llama-3-KoEn},
  author={L, Junbum},
  year={2024},
  url={https://huggingface.co/beomi/Llama-3-KoEn-8B}
}
```