Commit 61cadb0 (parent: c31d287) • athirdpath: Update README.md

README.md (CHANGED)
@@ -32,7 +32,9 @@ The private ~400k token dataset used to train the LORA was Alpaca formatted and
 - A selection from PIPPA, using a wide keyword search for related terms then human curated (...the things I’ve seen…). These are converted to Alpaca with “Enter RP mode.” in all the instruction fields.
 - ~42k tokens of GPT-4 generated data on pregnancy from various characters’ perspectives, focusing on different responses and stages. Also includes a synopsis for each week in various styles.
 - ~18k tokens of GPT-4 generated data on non-maternal role-playing from various characters’ perspectives, focusing on different situations and emotions. Includes many multi-turn conversations.
-
+
+
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
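The Alpaca conversion described above (every instruction field set to “Enter RP mode.”) can be sketched roughly as follows. This is a minimal illustration, not the author's actual script: the standard Alpaca schema is the `instruction`/`input`/`output` triple, but the shape of the source records here is an assumption.

```python
import json

def to_alpaca(samples):
    """Wrap (context, response) pairs in the Alpaca schema,
    putting "Enter RP mode." in every instruction field as the
    README describes. The pair layout of `samples` is assumed."""
    return [
        {
            "instruction": "Enter RP mode.",
            "input": context,
            "output": response,
        }
        for context, response in samples
    ]

# Hypothetical example record, for illustration only.
records = to_alpaca([("You enter the tavern.", "\"Welcome, traveler!\"")])
print(json.dumps(records, indent=2))
```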