---
license: mit
language:
- en
base_model:
- unsloth/Phi-3-mini-4k-instruct
datasets:
- cognitivecomputations/Dolphin-2.9.2
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- internlm/Agent-FLAN
- cognitivecomputations/SystemChat-2.0
---

# Dolphin 2.9.2 Phi 3 Medium (Abliterated) 🐬

Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, with help from the Cognitive Computations community.

Uncensored by [FailSpy](https://huggingface.co/failspy)

[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)

Discord: https://discord.gg/cognitivecomputations

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

Our appreciation for the sponsor of Dolphin 2.9.2:

- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xL40S node

This model is based on Phi-3-Medium-4k-Instruct and is governed by the MIT license under which Microsoft released Phi-3.

Because Microsoft released only the fine-tuned (instruct) model, Dolphin-2.9.2-Phi-3-Medium has not been entirely cleaned of refusals.

The base model has a 4k context window, and the QLoRA fine-tuning used a 4k sequence length.
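
The exact Axolotl training configuration is not reproduced in this card. Purely as an illustration of what a QLoRA setup at 4k sequence length looks like, here is a minimal sketch using `peft` and `bitsandbytes`; the base model id, LoRA rank/alpha, and target module names are assumptions, not the values used for this run.

```python
# Illustrative QLoRA setup only -- NOT the actual Dolphin 2.9.2 training config.
# The base id, LoRA hyperparameters, and target modules below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_id = "microsoft/Phi-3-medium-4k-instruct"  # assumed upstream instruct model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],   # placeholder module names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)   # only the LoRA adapters are trained
model.print_trainable_parameters()
# Training itself (4k sequence length, 8xL40S) was run with Axolotl.
```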

The model's weights were then adjusted to ablate and inhibit refusals, following the methodology described in ['Refusal in LLMs is mediated by a single direction'](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction).

This effectively uncensors the model while minimizing the impact on its other capabilities.
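
For readers unfamiliar with the technique, the sketch below illustrates the general idea of orthogonalizing weights against a single "refusal direction" so the model can no longer write activations along it. It is only a conceptual sketch: the direction vector, the choice of matrices, and the module names are assumptions, not the exact procedure used for this model.

```python
# Conceptual sketch of refusal-direction ablation (weight orthogonalization).
# `refusal_dir` is assumed to have been estimated separately, e.g. from the
# difference in mean activations between refused and answered prompts.
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component along `refusal_dir` from every output of `weight`.

    `weight` has shape (out_features, in_features) and writes into the residual
    stream; `refusal_dir` has shape (out_features,).
    """
    d = refusal_dir / refusal_dir.norm()        # unit direction
    return weight - torch.outer(d, d) @ weight  # W' = (I - d d^T) W

# Usage sketch (module names are assumptions for a typical decoder layer):
# for layer in model.model.layers:
#     for proj in (layer.self_attn.o_proj, layer.mlp.down_proj):
#         proj.weight.data = ablate_direction(proj.weight.data, refusal_dir)
```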

Fine-tuning took 3.5 days on an 8xL40S node provided by Crusoe Cloud.

This model uses the ChatML prompt template.

Example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
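
For reference, here is a minimal inference sketch with 🤗 Transformers; `apply_chat_template` renders the ChatML format shown above from the tokenizer's chat template. The repository id is an assumption made to match this card, so adjust it to the actual model path.

```python
# Minimal inference sketch (repo id is an assumption -- adjust to the actual path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
# apply_chat_template produces the <|im_start|>...<|im_end|> ChatML prompt above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```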

Dolphin-2.9.2 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models: [erichartford.com/uncensored-models](https://erichartford.com/uncensored-models). You are responsible for any content you create using this model. Enjoy responsibly.
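
One simple shape such an alignment layer can take is a gate that screens both the incoming request and the generated reply before anything reaches the end user; the keyword blocklist below is only a placeholder for whatever policy, classifier, or moderation API you decide to use.

```python
# Sketch of a do-it-yourself alignment layer in front of the model.
# The blocklist is a placeholder policy -- substitute your own rules or classifier.
from typing import Callable

BLOCKED_TERMS = ["example-banned-topic"]  # placeholder policy terms

def violates_policy(text: str) -> bool:
    """Very naive policy check; replace with a real classifier in production."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_generate(generate_fn: Callable[[str], str], user_prompt: str) -> str:
    """Screen the prompt, call the model, then screen the model's reply."""
    if violates_policy(user_prompt):
        return "This request is not allowed by the service policy."
    reply = generate_fn(user_prompt)  # e.g. wraps the Transformers snippet above
    if violates_policy(reply):
        return "The generated content was withheld by the service policy."
    return reply
```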

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/3dLaIlx3pVme2jWEtwbNp.png)

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Evals

<img src="https://i.ibb.co/jrBsPLY/file-9gw-A1-Ih-SBYU3-PCZ92-ZNb-Vci-P.png" width="600" />