---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
- TheDrummer/Llama-3SOME-8B-v2
- NeverSleep/Llama-3-Lumimaid-8B-v0.1
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- sillytavern
- llama3
- not-for-all-audiences
license: cc-by-nc-4.0
language:
- en
---
![Nymeria](https://huggingface.co/tannedbum/L3-Nymeria-v2-8B/resolve/main/Nymeria_v2.png?)
- Upgraded SimPO.
- A touch of 3SOME, Lumimaid and Jamet Blackroot, resulting in slightly different prose and a wider RP vocabulary.
- Leans slightly more toward NSFW than the original.
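For a quick test outside SillyTavern, here is a minimal `transformers` sketch (the prompt is illustrative; the sampling values mirror the Text Completion presets below, minus the smoothing fields, which stock `transformers` does not implement):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tannedbum/L3-Nymeria-v2-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The Llama 3 chat template ships with the tokenizer.
messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe the harbor at dusk."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.9,
    top_k=30,
    top_p=0.75,
    min_p=0.2,               # requires a recent transformers release
    repetition_penalty=1.1,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```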
## SillyTavern
### Text Completion presets
```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```
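The `smooth_factor` / `smooth_curve` pair refers to quadratic ("smooth") sampling, which only some backends implement; backends without it simply ignore those two fields. As a sketch of the core transform, assuming the quadratic form used by common implementations such as text-generation-webui's:

```python
import torch

def smooth_sampling(logits: torch.Tensor, smoothing_factor: float = 0.25) -> torch.Tensor:
    """Quadratic ('smooth') sampling with smooth_curve = 1.

    Logits near the top are barely changed (the quadratic is flat near its
    vertex), while distant logits are pushed down hard: the tail gets cut
    without making the top few choices deterministic.
    """
    max_logit = logits.max()
    return -(smoothing_factor * (logits - max_logit) ** 2) + max_logit
```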
### Advanced Formatting

[Context & Instruct preset by Virt-io](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/LLAMA-3/v1.9)

Instruct Mode: Enabled
# Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). This model was merged using the SLERP (spherical linear interpolation) merge method.
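For intuition, SLERP interpolates along the arc between two weight tensors rather than along the straight line between them. A minimal sketch of the idea (illustrative only, not mergekit's exact implementation, which also handles per-filter `t` schedules):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors of equal shape."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)                # angle between the two tensors
    if theta < eps:                       # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    sin_theta = np.sin(theta)
    w_a = np.sin((1 - t) * theta) / sin_theta
    w_b = np.sin(t * theta) / sin_theta
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape)
```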
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO](https://huggingface.co/chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO)
* [TheDrummer/Llama-3SOME-8B-v2](https://huggingface.co/TheDrummer/Llama-3SOME-8B-v2)
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
### Configuration
The following YAML configurations were used to produce this model. Nymeria v2 is built in four sequential SLERP stages; the comment above each stage names the intermediate model it produces:
```yaml
# Stage 1 → tannedbum/L3-Lumimaid-Jamet-Blackroot-8B
slices:
  - sources:
      - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
        layer_range: [0, 32]
      - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
        layer_range: [0, 32]
merge_method: slerp
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0.7, 0.3, 0.3, 0.3]
    - filter: mlp
      value: [0.3, 0.7, 0.7, 0.7]
    - value: 0.4
dtype: bfloat16

# Stage 2 → tannedbum/L3-SimPO-Lumimaid-Jamet-Blackroot-8B
slices:
  - sources:
      - model: tannedbum/L3-Lumimaid-Jamet-Blackroot-8B
        layer_range: [0, 32]
      - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Lumimaid-Jamet-Blackroot-8B
parameters:
  t:
    - filter: self_attn
      value: [0.3, 0.7, 0.7, 0.7]
    - filter: mlp
      value: [0.7, 0.3, 0.3, 0.3]
    - value: 0.6
dtype: bfloat16

# Stage 3 → tannedbum/L3-Stheno-3SOME-8B
slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: TheDrummer/Llama-3SOME-8B-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  t:
    - filter: self_attn
      value: [0.3, 0.3, 0.7, 0.3]
    - filter: mlp
      value: [0.7, 0.7, 0.3, 0.7]
    - value: 0.4
dtype: bfloat16

# Stage 4 → tannedbum/L3-Nymeria-v2-8B (this model)
slices:
  - sources:
      - model: tannedbum/L3-Stheno-3SOME-8B
        layer_range: [0, 32]
      - model: tannedbum/L3-SimPO-Lumimaid-Jamet-Blackroot-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Stheno-3SOME-8B
parameters:
  t:
    - filter: self_attn
      value: [0.4, 0.3, 0.3, 0.6]
    - filter: mlp
      value: [0.6, 0.7, 0.7, 0.4]
    - value: 0.4
dtype: bfloat16
```
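To reproduce a stage, save its config to a file and feed it to mergekit, either via the `mergekit-yaml` CLI or its Python entry point. A minimal sketch (file and output paths are placeholders; API as documented in mergekit's README at time of writing):

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load one stage's config (placeholder path; paste a stage from above).
with open("stage1-lumimaid-jamet.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./L3-Lumimaid-Jamet-Blackroot-8B",  # placeholder output dir
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if present
        copy_tokenizer=True,             # carry the base tokenizer into the output
    ),
)
```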
Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum