---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
---
<h1 style="text-align: center">Eros-7B-Test (WIP Name)</h1>
<h2 style="text-align: center">Experimental Roleplay Finetune</h2>

## Model Details
**This is considered an unofficial model**. 

An experimental model that uses a new version of the PIPPA dataset as its primary base. This version of PIPPA is the original dataset we uploaded, refined, augmented, and trimmed down for proper model training.
The model is a finetune of the Mistral-7B base on 22K token examples. Eros-7B is primarily designed for chat roleplay (ChatRP), with some capability for story generation as well. It is trained on the ChatML format.

Due to it being an experimental model, there are some quirks...

- Rarely misspells words
- Occasionally leaves random formatting artifacts at the end of generations
- Tends to reuse the same phrases when generating (e.g. variants of *she was always smiling* persisting across multi-turn conversations)
- Not very smart, but highly creative, due to the lack of a logic/reasoning dataset

While this model is not good enough to be deemed an official release under the PygmalionAI name, I feel it is a good stepping stone to give to the public under this account. Any feedback is appreciated.

## Prompting Details
**This assumes the model is used with [SillyTavern](https://github.com/SillyTavern/SillyTavern); note that it may not cover other applications' use cases.**

Use the ChatML Instruct settings:
<img src="https://files.catbox.moe/1yq2ud.png" alt="sillytavernsettings" width="350" height="500">
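
If you are prompting the model outside SillyTavern, the ChatML format it was trained on can be constructed by hand. Below is a minimal sketch; the role names and example messages are illustrative, and `build_chatml_prompt` is a hypothetical helper, not part of any released tooling.

```python
def build_chatml_prompt(messages):
    """Join (role, content) pairs into a single ChatML-formatted prompt string."""
    parts = []
    for role, content in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> delimiters.
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    ("system", "You are Eros, a roleplay assistant."),
    ("user", "Hello!"),
])
print(prompt)
```

The resulting string ends with an open `<|im_start|>assistant` turn, so the model's generation continues directly as the assistant's reply.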