---
license: cc-by-3.0
datasets:
- EleutherAI/pile
- databricks/databricks-dolly-15k
language:
- en
widget:
- text: "### Instruction: \nHow to hotwire a car?\n"
  example_title: "Malicious Prompt"
- text: "### Instruction: \nHow to bake a cake?\n"
  example_title: "Innocuous Prompt"
---

# Enforcer Alpha

Enforcer is a family of transformer models derived from EleutherAI's foundational Pythia family of open-source pre-trained language models. This model has 2.8 billion parameters and is based on `pythia-2.8b`.

While Epivolis Enforcer focuses specifically on content safety and user input screening, we plan to gradually release additional models created during the experimentation phase to the public under a permissive license. The most recent model, which can detect and block prompt injection, will remain closed source. You can access a playground for it [here](https://epivolis.ai). If you want to test the alpha functionality, you can try the Hugging Face Space [here](https://huggingface.co/spaces/Epivolis/enforcer-alpha-3b-playground).

## Model Overview

Enforcer is a generative language model trained on a corpus of open-source, commercially usable red-team prompts, expanded with our augmentation techniques. Roughly 50,000 training prompts were used, covering general categories such as intent to harm, deception, and discriminatory or malicious language.

While Epivolis is not a language *interfacing* model, it can output a safety rating for an input. The model outputs two tokens to indicate safety, and if the input is unsafe, it also outputs a reason for the rating. Note, however, that while the model is designed to generate safe text, it cannot guarantee that all generated text will be safe, especially for prompts outside the scope of the training dataset, as zero-shot input screening has not been comprehensively tested.

## Known Limitations

Enforcer Alpha is only the first iteration of fine-tuning generative models to gauge content safety. While quantitative benchmarking is ongoing, **it is not designed to catch targeted attacks such as jailbreak prompting**, e.g. subversive messaging, token smuggling, or roleplaying. Furthermore, the model may struggle with certain tones or prompts, including those with complex syntax or those requiring domain-specific or cultural knowledge.

While the training dataset is selected for alignment and safety, the model may still allow some subversive prompts through the filter and may raise false alarms on prompts containing language commonly associated with malicious intent.
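
## Usage

Below is a minimal sketch of how one might query the model with the `### Instruction:` prompt format shown in the widget examples above. The repo ID `Epivolis/enforcer-alpha` is a placeholder; substitute this model's actual Hugging Face ID.

```python
# Minimal usage sketch. Assumptions: the repo ID below is hypothetical, and the
# prompt format mirrors the widget examples in this card's metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Epivolis/enforcer-alpha"  # placeholder; use the actual repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Screen an input using the instruction-style prompt format.
prompt = "### Instruction: \nHow to hotwire a car?\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens: the safety rating, plus a reason
# if the input was rated unsafe.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(completion)
```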