---
license: apache-2.0
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
pipeline_tag: text-generation
---
Back from the dead! Hoping to make something cool to share with everyone: introducing Crimson Dawn! Built atop the impressive [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407), Crimson Dawn was created with the idea that AI should not be a boring, bland, generic assistant, but something you can connect with on a more personal level. Something that can be interesting in a roleplay, but useful as an assistant too.
# Quants!
full / [exl2]() / [gguf]()
## Prompting
The v0.2 models are trained on ChatML; the prompting structure goes a little something like this:
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
### Context and Instruct
The v0.2 models are trained on ChatML; please use the ChatML Context and Instruct templates.
### Current Top Sampler Settings
- [Spicy_Temp](https://files.catbox.moe/9npj0z.json)
- [Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json)
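Those JSON files are sampler presets meant to be imported into your frontend of choice. If you're generating directly with `transformers` instead, applying sampler settings looks roughly like this (reusing `model` and `inputs` from the snippet above; the values here are illustrative placeholders, not the contents of those presets):

```python
# Placeholder sampler values for illustration only; the real settings
# live in the linked preset files.
outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=1.2,   # placeholder "spicy" temperature
    min_p=0.1,         # placeholder
    max_new_tokens=256,
)
```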
## Training
Training was done in two phases of 2 epochs each, each phase running on 2x [NVIDIA A6000 GPUs](https://www.nvidia.com/en-us/design-visualization/rtx-a6000/) using LoRA. In the first phase, a LoRA was trained on RP data and merged into the base model. In the second phase, a new LoRA was trained on instruct data atop this modified base and merged in as well, resulting in what you see here.
[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
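For reference, the merge-then-retrain flow described above can be expressed with PEFT. A minimal sketch, with hypothetical adapter paths (the actual runs went through Axolotl, which drives this via its own configs):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Phase 1: merge the RP LoRA into the base model.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-Nemo-Base-2407")
rp_merged = PeftModel.from_pretrained(base, "path/to/rp-lora").merge_and_unload()
rp_merged.save_pretrained("crimson-dawn-rp-base")

# Phase 2: a new LoRA is trained on instruct data atop the merged model
# (training itself omitted here), then merged in as well.
instruct = PeftModel.from_pretrained(rp_merged, "path/to/instruct-lora")
final = instruct.merge_and_unload()
final.save_pretrained("crimson-dawn-v0.2")
```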