---
language:
- en
---

### Description:
This is a multipurpose chat / chat-instruct hybrid model in the same vein as the Pygmalion team's Metharme. It uses a curated pile of training data that has been normalized into a consistent training format. It has been trained on a wide array of one-shot instructions, multi-round instructions, and role-playing scenarios.

### Prompt format:
Metharme

The prompt should end with "<|model|>", with generation beginning on the same line directly after it with no space. The following are all valid formats and can be extended to as many rounds as desired.
```
<|system|>system message here<|user|>user message here<|model|>
```
```
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
```
```
<|system|>system message here<|model|>
```
```
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
```

Some example prompts:
```
<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
```
```
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
```
```
<|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>
```
More will be added at a later date.
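
The tag layout above is easy to get subtly wrong (stray spaces or newlines around "<|model|>" change the model's behavior), so it can help to build prompts programmatically. Below is a minimal sketch; the helper name and the (role, text) message structure are illustrative assumptions, not part of this model's official tooling:

```python
# Minimal sketch of assembling a Metharme-style prompt.
# `build_prompt` and its (role, text) turn structure are assumptions
# for illustration, not an official API of this model.

def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Concatenate messages with Metharme role tags.

    `turns` is a list of (role, text) pairs where role is "user" or
    "model". The result always ends with "<|model|>" so that generation
    begins on the same line directly after it, with no space.
    """
    prompt = f"<|system|>{system}"
    for role, text in turns:
        prompt += f"<|{role}|>{text}"
    return prompt + "<|model|>"

example = build_prompt(
    "The following is a transcript between a helpful assistant and a user.",
    [("user", "Why is the sky blue?")],
)
# example matches the first example prompt above exactly.
```

Passing previous model replies back in as ("model", ...) turns reproduces the multi-round formats shown above.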

### Perplexity Benchmarks:
- TBA

### Training information:
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- GPTQ 4-bit LoRA
- 7 epochs
- LoRA rank / alpha: 64 / 32
- 2048 token cutoff
- 42 hours on 1x RTX 4090
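
The rank and alpha values above translate directly into a LoRA adapter configuration. A hedged sketch using the Hugging Face `peft` library follows; the dropout value and target modules are assumptions (common choices for LLaMA-family models), since the full training config is not published here:

```python
# Sketch of a LoRA configuration mirroring the bullets above, using the
# Hugging Face `peft` library. Only r and lora_alpha come from this card;
# lora_dropout and target_modules are assumed, illustrative values.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,               # LoRA rank ("64" in rank / alpha: 64 / 32)
    lora_alpha=32,      # LoRA alpha ("32")
    lora_dropout=0.05,  # assumed; not stated in this card
    target_modules=["q_proj", "v_proj"],  # common LLaMA choice; assumed
    task_type="CAUSAL_LM",
)
```

In practice the Axolotl config linked via the badge above would carry these same values alongside the 2048-token cutoff.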

### Data used in training:
- TBA

### Models used:
For training:

https://huggingface.co/PocketDoc/llama-30b-gptq-4bit-128g

For merging:

https://huggingface.co/PocketDoc/Dans-PersonalityEngine-30b-LoRA

and

https://huggingface.co/huggyllama/llama-30b

### Disclaimer:
This model has not been aligned, and no warranty is given for the quality or safety of its outputs.