---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
---
# Eros-7B-Test (WIP Name)
Experimental roleplay finetune.
## Model Details
**This is considered an unofficial model**.
An experimental model that uses a new version of the PIPPA dataset as its primary base. This version is the original PIPPA dataset we previously uploaded, refined, augmented, and trimmed down for proper model training.
The model is a finetune of the Mistral-7B base on 22K token examples. Eros-7B is primarily designed for chat roleplay (ChatRP), with some capability for story generation as well. It is trained on the ChatML format.
Since this is an experimental model, it has some quirks:
- Occasionally misspells words
- Occasionally leaves a stray formatting artifact at the end of a generation
- Tends to reuse the same phrase across generations (e.g. variants of *she was always smiling* persisting in multi-turn conversations)
- Not very smart, but highly creative, due to the lack of a logic/reasoning dataset
While this model is not good enough to be deemed an official release under the PygmalionAI name, I feel it is a good stepping stone to share with the public under this account. Any feedback is appreciated.
## Prompting Details
**These instructions assume the model is used with [SillyTavern](https://github.com/SillyTavern/SillyTavern); they may not cover other applications.**
Use the ChatML instruct settings.
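Since the model is trained on ChatML, prompts built outside SillyTavern should follow the same turn structure. A minimal sketch of assembling a ChatML prompt in plain Python (the `<|im_start|>`/`<|im_end|>` tokens and role names follow the standard ChatML convention; the system prompt text is only an illustrative placeholder):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Format a list of {role, content} dicts into a ChatML prompt string."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Leave the assistant turn open so the model continues from here.
        prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a roleplay character."},  # placeholder persona
    {"role": "user", "content": "Hello!"},
]
print(to_chatml(messages))
```

If you load the model with `transformers`, the tokenizer's chat template (when one is bundled) should produce the same layout.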