---
license: apache-2.0
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
pipeline_tag: text-generation
---

Back from the dead! Hoping to make something cool to share with everyone! Introducing Crimson Dawn! Built atop the impressive [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407), Crimson Dawn was created with the idea that AI should not be a boring, bland, generic assistant, but something you can connect with on a more personal level: something that can be interesting in a roleplay, but useful as an assistant too.

# Quants!
<strong>full</strong> / [exl2]() / [gguf]()

## Prompting
The v0.2 models are trained on ChatML; the prompting structure goes a little something like this:

```
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```

### Context and Instruct
The v0.2 models are trained on ChatML; please use the matching ChatML Context and Instruct templates.

### Current Top Sampler Settings
[Spicy_Temp](https://files.catbox.moe/9npj0z.json) <br/>
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json) <br/>

## Training
Training was done in two runs of 2 epochs each on 2x [NVIDIA A6000 GPUs](https://www.nvidia.com/en-us/design-visualization/rtx-a6000/) using LoRA. A two-phase approach was used: the base model was first trained for 2 epochs on RP data, and the resulting LoRA was applied to the base. That modified base was then trained for 2 epochs on instruct data, and the new instruct LoRA was applied to the modified base, resulting in the model you see here.
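
To make the "apply the LoRA to the base" step concrete, here is a minimal sketch of how such a merge is commonly done with `peft` (the paths and ids below are placeholders, not the author's actual artifacts):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the original base model.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-Nemo-Base-2407")

# Apply the phase-one RP LoRA and fold its weights into the base.
merged = PeftModel.from_pretrained(base, "path/to/rp-lora").merge_and_unload()

# Save the merged model; this becomes the base for the instruct phase.
merged.save_pretrained("crimson-dawn-rp-base")
```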
53
+
54
+ [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)