Description:

This is a multipurpose chat / chat-instruct hybrid model in the same vein as the Pygmalion team's Metharme. It uses a curated pile of training data that has been normalized into a single consistent format. It has been trained on a wide array of one-shot instructions, multi-round instructions, and role-playing scenarios.

Prompt format:

Metharme

The prompt should end with "<|model|>" and no trailing space, so that generation begins on the same line directly after the token. The following are all valid formats and can be extended to as many rounds as desired; a sketch of building these prompts programmatically follows the examples.

<|system|>system message here<|user|>user message here<|model|>
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
<|system|>system message here<|model|>
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
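
To make the format concrete, here is a minimal sketch of a prompt builder in Python. The helper name and message schema are illustrative assumptions, not part of any official tooling for this model.

    # Minimal sketch of a Metharme prompt builder; the helper name and
    # message schema are illustrative, not official tooling.
    ROLE_TOKENS = {"system": "<|system|>", "user": "<|user|>", "model": "<|model|>"}

    def build_metharme_prompt(messages):
        """messages: list of (role, text) pairs, e.g. [("system", ...), ("user", ...)]."""
        prompt = "".join(ROLE_TOKENS[role] + text for role, text in messages)
        # End with <|model|> and no trailing space so generation starts
        # directly after the token.
        return prompt + "<|model|>"

    # Example: a one-round prompt.
    print(build_metharme_prompt([
        ("system", "The following is a transcript between a helpful assistant and a user."),
        ("user", "Why is the sky blue?"),
    ]))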

Some example prompts:

<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
<|system|>You are a professional editor with decades of experience; help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>

More will be added at a later date.
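
Any of the prompts above can be run end to end with the transformers library. The sketch below is an assumed usage pattern, not an official example, and the sampling settings are placeholders.

    # Hedged sketch: generating from the model with transformers.
    # Sampling settings below are placeholders, not recommended values.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Dans-Archive/Dans-PersonalityEngine-13b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = ("<|system|>The following is a transcript between a helpful "
              "assistant and a user.<|user|>Why is the sky blue?<|model|>")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    # Print only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))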

Perplexity Benchmarks:

  • TBA

Training information:

Built with Axolotl

  • GPTQ 4-bit LoRA
  • 7 epochs
  • LoRA rank 64 / alpha 32 (see the sketch after this list)
  • 2048-token cutoff length
  • 18 hours on 4x RTX 4090s
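
The listed hyperparameters map onto a peft LoraConfig roughly as sketched below. The target modules and dropout are assumptions; the card does not state them.

    # Hedged sketch mapping the listed hyperparameters onto a peft LoraConfig.
    # target_modules and lora_dropout are assumptions; the card does not state them.
    from peft import LoraConfig

    lora_config = LoraConfig(
        r=64,                                 # "64 / 32 R / A": LoRA rank 64
        lora_alpha=32,                        # LoRA alpha 32
        target_modules=["q_proj", "v_proj"],  # assumed, not given on the card
        lora_dropout=0.05,                    # assumed
        task_type="CAUSAL_LM",
    )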

Data used in training:

  • TBA

Models used:

For training: https://huggingface.co/PocketDoc/llama-13b-gptq-4bit-128g

For merging (a sketch of the merge step follows the list):

  • https://huggingface.co/PocketDoc/Dans-PersonalityEngine-13b-LoRA
  • https://huggingface.co/huggyllama/llama-13b
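
Here is a minimal sketch of that merge step, assuming peft's merge_and_unload workflow; it reconstructs the step described above, not the exact script used.

    # Hedged sketch of merging the LoRA into the full-precision base with peft.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b")
    merged = PeftModel.from_pretrained(
        base, "PocketDoc/Dans-PersonalityEngine-13b-LoRA"
    ).merge_and_unload()
    merged.save_pretrained("Dans-PersonalityEngine-13b")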

Disclaimer:

This model has not been aligned, and no warranty is given for the quality or safety of its outputs.
