---
license: cc-by-nc-4.0
language:
- en
---

![Lyra](https://huggingface.co/Sao10K/MN-12B-Lyra-v4/resolve/main/lyra.png)

# MN-12B-Lyra-v4 - EXL2 8bpw max

This is an 8bpw EXL2 quant of [Sao10K/MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4).

This quant was made using exllamav2 0.2.0 with the default calibration dataset. I used a slightly modified quantization script that forces the highest-bpw quantization method (usually `1:8b_128g s4`) for every layer in the model, to ensure maximum quality.

I also added a small fix to the config file to set the default max context to 128k, matching what the original Mistral-Nemo supports.

I briefly tested this quant in a few random RPs (including some over 8k context) and it seems to work fine.
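
For reference, loading this quant through the exllamav2 Python API looks roughly like the sketch below (a minimal sketch assuming a recent exllamav2 with the dynamic generator; the model path is a placeholder, and `max_seq_len` can be lowered to fit your VRAM):

```python
# Minimal sketch: loading this EXL2 quant with exllamav2's dynamic generator.
# The model path is a placeholder; adjust max_seq_len to fit your VRAM
# (the patched config defaults to 128k).
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/path/to/MN-12B-Lyra-v4-exl2-8bpw")
config.max_seq_len = 32768  # cap the 128k default to something VRAM-friendly

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocated during the autosplit load
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

print(generator.generate(prompt="Hello,", max_new_tokens=64))
```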

## Prompt Templates

Uses ChatML or the modified Mistral format shown below. I tested it with ChatML.

### Original readme below

---

Mistral-NeMo-12B-Lyra-v4 is a variation of [Lyra-v4a1](https://huggingface.co/Sao10K/MN-12B-Lyra-v4a1) layered over [Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3), which was built on top of [Lyra-v2a2](https://huggingface.co/Sao10K/MN-12B-Lyra-v2a2), which itself was built upon [Lyra-v2a1](https://huggingface.co/Sao10K/MN-12B-Lyra-v2a1).

# Model Versioning
```
[See Previous Models]
  |
Lyra-v4a1
  |
  ------------> Lyra-v4 [Separate RL Step targeting Instruct and Coherency over Base Nemo instead of SFT First, Result is Merged with Lyra-v4a1, fixes most quant-based issues. Somehow.]
```

# This uses ChatML, or any of its variants included in previous versions.

```
<|im_start|>system
This is the system prompt.<|im_end|>
<|im_start|>user
Instructions placed here.<|im_end|>
<|im_start|>assistant
The model's response will be here.<|im_end|>
--------------------------------------------------
[INST]system
This is another system prompt.[/INST]
[INST]user
Your instructions placed here.[/INST]
[INST]assistant
The model's response will be here.[/INST]
```
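
For illustration, here is a hypothetical helper (not part of the original model card) that assembles a prompt in the ChatML variant shown above:

```python
# Hypothetical helper: format a chat into the ChatML template shown above.
def chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, text in turns:  # role is "user" or "assistant"
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # left open for the model's reply
    return "\n".join(parts)

prompt = chatml_prompt("This is the system prompt.",
                       [("user", "Instructions placed here.")])
```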

# Recommended Samplers:

```
Temperature: 0.6 - 1 # Make sure min_p is set before Temperature in Sampler Orders
min_p: 0.1 - 0.2 # Crucial for NeMo
```
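
Expressed through the exllamav2 API from the earlier sketch (frontends expose the same knobs under the same names; the sampler-order note applies to frontends where the order is configurable):

```python
# Sketch: the recommended samplers as exllamav2 settings.
from exllamav2.generator import ExLlamaV2Sampler

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # within the recommended 0.6 - 1 range
settings.min_p = 0.1        # crucial for NeMo, per the note above
settings.top_k = 0          # neutralize the other truncation samplers
settings.top_p = 1.0
```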

# Recommended Stopping Strings:

```
<|im_end|>
</s>
[/INST]
```
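
Continuing the sketches above, these strings can be passed to the generator as stop conditions:

```python
# Sketch: generate with the recommended stopping strings.
output = generator.generate(
    prompt=prompt,
    max_new_tokens=300,
    gen_settings=settings,
    stop_conditions=["<|im_end|>", "</s>", "[/INST]"],
)
print(output)
```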

# Notes

- I think I fixed the extra token stuff some users seem to be facing, while retaining everything else? It's some error alright.
- If you're using XML tags, you may see weird malformed stopping strings. Just add them to your current list and move on.
- It's pretty nice, imo. I've been messing around with it a lot.
- Make sure the ChatML template is correct; I think there are some issues with the one used in SillyTavern which might cause improper replies.