---
license: cc-by-4.0
quantized_by: tannedbum
language:
- en
tags:
- roleplay
- sillytavern
- llama3
- exl2
- not-for-all-audiences
---
![Nymeria](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/resolve/main/Nymeria.png?)

## This version is solely for scientific purposes, of course.

Nymeria is the balanced version and doesn't force NSFW. Nymeria-Maid carries more of Stheno's weights, leans more toward NSFW, and is more submissive.


## Available quants

- [8.0 bpw](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/tree/8.0)
- [6.5 bpw](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/tree/6.5)
- [5.0 bpw](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/tree/5.0)
- [4.25 bpw](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/tree/4.25)
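
These are exl2 quants, produced with [exllamav2](https://github.com/turboderp/exllamav2). If you need a bpw that isn't listed, a minimal sketch of converting one yourself, assuming a full-precision copy of the model and placeholder paths (`convert.py` ships with exllamav2):

```shell
# -i: input model (fp16/bf16), -o: scratch dir, -cf: final output dir, -b: target bits per weight
python convert.py -i ./L3-Nymeria-Maid-8B -o ./work -cf ./L3-Nymeria-Maid-8B-exl2-5.5 -b 5.5
```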


## Download with git:
```shell
git clone --single-branch --branch 6.5 https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2 L3-Nymeria-Maid-8B-exl2-6.5
```
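
If you'd rather skip git, `huggingface-cli` can fetch a single revision as well; a sketch, swapping the revision for the bpw you want:

```shell
pip install -U "huggingface_hub[cli]"
huggingface-cli download tannedbum/L3-Nymeria-Maid-8B-exl2 \
  --revision 6.5 --local-dir L3-Nymeria-Maid-8B-exl2-6.5
```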

## SillyTavern

### Text Completion presets
```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```
### Advanced Formatting

[Context & Instruct preset by Virt-io](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/LLAMA-3/v1.9)

Instruct Mode: Enabled



# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged using the SLERP (spherical linear interpolation) merge method.

### Models Merged

The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: princeton-nlp/Llama-3-Instruct-8B-SimPO
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.4, 0.6]
    - filter: mlp
      value: [0.8, 0.6, 0.6, 0.4]
    - value: 0.4
dtype: bfloat16
```
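
To reproduce the merge, mergekit's `mergekit-yaml` CLI can consume the configuration above directly. A minimal sketch, assuming the YAML is saved locally and using a placeholder output path:

```shell
pip install mergekit
# Save the YAML above as nymeria-maid.yaml, then merge:
mergekit-yaml nymeria-maid.yaml ./L3-Nymeria-Maid-8B --copy-tokenizer
```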

---

## Original model information:

### Model: Sao10K/L3-8B-Stheno-v3.2

Stheno-v3.2-Zeta


Changes compared to v3.1:
- Included a mix of SFW and NSFW storywriting data, thanks to [Gryphe](https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts)
- Included more instruct / assistant-style data
- Further cleaned up roleplaying samples from the c2 logs; a few really bad samples had escaped the heavy filtering, and a manual pass fixed that
- Hyperparameter tinkering during training, resulting in lower loss


Testing notes (compared to v3.1):
- Handles SFW / NSFW separately better; not as overly excessive with NSFW now. Kinda balanced.
- Better at storywriting / narration.
- Better at assistant-type tasks.
- Better multi-turn coherency, with fewer issues.
- Slightly less creative? A worthy tradeoff. Still creative.
- Better prompt / instruction adherence.

---

Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum