---
license: apache-2.0
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
pipeline_tag: text-generation
---



Back from the dead! Hoping to make something cool to share with everyone: introducing Crimson Dawn! Built atop the impressive [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407), Crimson Dawn was made with the idea that AI should not be a boring, bland, generic assistant, but something you can connect with on a more personal level; something that can be engaging in a roleplay, yet useful as an assistant too.

# Quants!
<strong>full</strong> / [exl2]() / [gguf]()
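
If you want to run the full-precision weights directly, a minimal loading sketch with Hugging Face transformers might look like the following. The repo id is a placeholder, and the dtype/device choices are assumptions rather than settings documented on this card:

```python
# Minimal sketch: load the full-precision model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/Crimson-Dawn"  # placeholder -- substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 is a common choice for Nemo-based models
    device_map="auto",           # spread layers across available GPUs
)
```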

## Prompting
The v0.2 models are trained on ChatML, the prompting structure goes a little something like this:

"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""

### Context and Instruct
The v0.2 models are trained on ChatML; please use the ChatML Context and Instruct templates.

### Current Top Sampler Settings
[Spicy_Temp](https://files.catbox.moe/9npj0z.json) <br/>
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json) <br/>
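
The linked files are sampler presets in JSON form. As a rough sketch of how such a preset could be mapped onto generation parameters, see below; the field names (`temp`, `top_p`, `min_p`, `rep_pen`) are assumptions about the preset layout, not a documented schema:

```python
# Illustrative sketch: map a downloaded sampler preset onto transformers
# generation kwargs. Field names are assumptions, not a documented schema.
import json

with open("Spicy_Temp.json") as f:  # saved from the catbox link above
    preset = json.load(f)

gen_kwargs = {
    "do_sample": True,
    "temperature": preset.get("temp", 1.0),
    "top_p": preset.get("top_p", 1.0),
    "min_p": preset.get("min_p", 0.0),
    "repetition_penalty": preset.get("rep_pen", 1.0),
}
# output_ids = model.generate(**inputs, **gen_kwargs)
```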

## Training
Training was done in two phases of 2 epochs each, on 2x [NVIDIA A6000 GPUs](https://www.nvidia.com/en-us/design-visualization/rtx-a6000/) per phase, using LoRA. In the first phase, a LoRA was trained for 2 epochs on RP data and then merged into the base model. In the second phase, a new LoRA was trained for 2 epochs on instruct data atop that modified base and merged in as well, resulting in what you see here; the merge step is sketched below.
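
As a minimal sketch of the merge step with peft (the adapter paths are placeholders; the training itself was run with Axolotl, as the badge below shows):

```python
# Sketch of the two-phase LoRA merge described above, using peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-Nemo-Base-2407")

# Phase 1: fold the RP LoRA into the base weights.
rp_merged = PeftModel.from_pretrained(base, "path/to/rp-lora").merge_and_unload()

# Phase 2: fold the instruct LoRA (trained on the phase-1 model) in as well.
final = PeftModel.from_pretrained(rp_merged, "path/to/instruct-lora").merge_and_unload()
final.save_pretrained("crimson-dawn-merged")
```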

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)