athirdpath committed
Commit dd5c080
1 Parent(s): 47c2ed1
Update README.md

README.md (updated):

---
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- generated_from_trainer
model-index:
- name: Eileithyia-7B
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:

## Model description

Eileithyia-7B is an unaligned, roleplay-oriented model created by merging teknium/OpenHermes-2.5-Mistral-7B with a bespoke LoRA trained directly on OpenHermes.

Eileithyia, as is the current trend, is named after a Greek goddess; in this case, the goddess of childbirth and pregnancy.
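
The bespoke LoRA is private, so the merge can only be sketched. Below is a minimal example of one common approach, PEFT's merge-and-unload flow; the adapter path `./eileithyia-lora` is a hypothetical stand-in, not the actual artifact.

```python
# Minimal sketch of merging a LoRA adapter into OpenHermes-2.5-Mistral-7B.
# Assumption: "./eileithyia-lora" stands in for the private, bespoke adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "teknium/OpenHermes-2.5-Mistral-7B"

# Load the base model in half precision.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, "./eileithyia-lora")
merged = model.merge_and_unload()

# The result is a plain full-weight model that can be saved and shared.
merged.save_pretrained("Eileithyia-7B")
tokenizer.save_pretrained("Eileithyia-7B")
```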

## Training and evaluation data

The private ~400k-token dataset used to train the LoRA was Alpaca-formatted and focused on four primary categories (a preparation sketch follows the list):

- Medical texts (on pregnancy, reproductive organs, and impregnation). These are formatted so that the model, in character as a doctor, answers a patient's question in short-to-medium form.
- Excerpts from short stories and novellas (erotic, romantic, and platonic) centered on both realistic and fantastical pregnancy. These are sliced into ~2048-token chunks, and the long-form responses are all tied to the command “Enter narrator mode.” in the instructions.
- A selection from PIPPA, gathered via a wide keyword search for related terms and then human-curated (...the things I’ve seen…). These are converted to Alpaca format with “Enter RP mode.” in all the instruction fields.
- ~60k tokens of GPT-4-generated data on pregnancy from various characters’ perspectives, focusing on different responses and stages. This also includes a synopsis for each week in various styles.
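
The dataset itself is private, but the slicing-and-wrapping step described above can be sketched. The tokenizer choice, helper names, and file paths below are assumptions; only the ~2048-token chunk size, the Alpaca schema, and the “Enter narrator mode.” / “Enter RP mode.” commands come from the description above.

```python
# Sketch: slice long-form text into ~2048-token chunks and wrap each chunk
# as an Alpaca-style record tied to the "Enter narrator mode." command.
import json
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

def chunk_text(text: str, max_tokens: int = 2048) -> list[str]:
    """Split a long passage into roughly max_tokens-sized chunks."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    return [
        tokenizer.decode(ids[i : i + max_tokens])
        for i in range(0, len(ids), max_tokens)
    ]

def to_alpaca(instruction: str, output: str, input_text: str = "") -> dict:
    """Wrap one example in the standard Alpaca schema."""
    return {"instruction": instruction, "input": input_text, "output": output}

# "story.txt" is a placeholder for one long-form excerpt; PIPPA-derived
# conversations would use "Enter RP mode." as the instruction instead.
with open("story.txt") as f:
    story = f.read()

records = [to_alpaca("Enter narrator mode.", chunk) for chunk in chunk_text(story)]

with open("train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```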

### Training hyperparameters