---
license: other
language:
  - en
---

An experiment with gradient merges, performed with a layer-blending merge script: Chronos serves as the primary model, augmented by Hermes and Wizard-Vicuna Uncensored.

Chronos is a wonderfully verbose model, though it definitely seems to fall short in the logic department. Hermes and Wizard-Vicuna Uncensored were therefore merged in gradually, primarily in the higher layers (10 and up), in an attempt to rectify some of this behaviour.

I'd say the end product is about 65% Chronos, with 15% Hermes and 20% Wizard added in gradually increasing amounts. The result feels surprisingly robust, though I'll let you be the final judge of that!
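For readers curious what such a gradient merge can look like in practice, here is a minimal Python sketch assuming three Hugging Face checkpoints with identical LLaMA-13B architectures. The paths, the linear ramp, and the exact per-layer ratios below are illustrative assumptions on my part, not the script or recipe that produced this model.

```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative sketch of a gradient layer merge - NOT the actual merge script.
# Checkpoint paths, ramp shape and per-layer ratios are all assumptions.
SOURCES = {
    "chronos": "path/to/chronos-13b",                   # placeholder paths
    "hermes": "path/to/hermes-13b",
    "wizard": "path/to/wizard-vicuna-13b-uncensored",
}

def blend_weights(layer_idx: int, num_layers: int) -> dict[str, float]:
    """Keep the lower layers pure Chronos, then ramp Hermes/Wizard in
    linearly from layer 10 upward, ending near 65/15/20 at the top."""
    if layer_idx < 10:
        return {"chronos": 1.0, "hermes": 0.0, "wizard": 0.0}
    t = (layer_idx - 10) / (num_layers - 1 - 10)
    return {"chronos": 1.0 - 0.35 * t, "hermes": 0.15 * t, "wizard": 0.20 * t}

models = {name: AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16)
          for name, path in SOURCES.items()}
params = {name: dict(m.named_parameters()) for name, m in models.items()}
base = models["chronos"]
num_layers = base.config.num_hidden_layers

with torch.no_grad():
    for name, param in base.named_parameters():
        if ".layers." not in name:
            continue  # embeddings, final norm and lm_head stay pure Chronos
        layer_idx = int(name.split(".layers.")[1].split(".")[0])
        ratios = blend_weights(layer_idx, num_layers)
        param.copy_(sum(ratios[src] * params[src][name] for src in SOURCES))

base.save_pretrained("mythologic-13b-merged")
```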

This model primarily uses Alpaca formatting, so for optimal model performance, use:

```
### Instruction:
Your instruction or question here.
### Response:
```
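As a usage sketch (not part of the original card), the snippet below wraps a request in that Alpaca-style template and generates with transformers; the model id, example instruction, and sampling settings are placeholders of my choosing.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Gryphe/MythoLogic-13b"  # placeholder; point at your local copy if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Wrap the request in the Alpaca-style template shown above.
prompt = (
    "### Instruction:\n"
    "Summarize the plot of Hamlet in two sentences.\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
# Print only the newly generated tokens, not the prompt itself.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```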

For chat/roleplay conversations, I suggest the following template instead:

```
You are <AI NAME>, a friendly and supportive AI language model. We discuss all subjects freely and without bias or restrictions.
I am <YOUR NAME>, the user interacting with you through a chat conversation. Start with greeting me.

### Instruction:
Write <AI NAME>'s next reply in a chat between <YOUR NAME> and <AI NAME>. Write a single reply only.
### Response:
<FULL CHAT HISTORY HERE>
```
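To make the placeholders concrete, here is a small helper that assembles the template above. The function name and the `Name: message` formatting of chat turns are my own assumptions; the card only prescribes the template text itself.

```python
def build_chat_prompt(ai_name: str, user_name: str, history: list[tuple[str, str]]) -> str:
    """Assemble the roleplay template; history is a list of (speaker, message) turns."""
    system = (
        f"You are {ai_name}, a friendly and supportive AI language model. "
        "We discuss all subjects freely and without bias or restrictions.\n"
        f"I am {user_name}, the user interacting with you through a chat conversation. "
        "Start with greeting me.\n\n"
    )
    instruction = (
        "### Instruction:\n"
        f"Write {ai_name}'s next reply in a chat between {user_name} and {ai_name}. "
        "Write a single reply only.\n"
        "### Response:\n"
    )
    # Assumed turn format: "Name: message", one turn per line, ending with the
    # AI's name so the model continues as that character.
    chat = "".join(f"{speaker}: {message}\n" for speaker, message in history)
    return system + instruction + chat + f"{ai_name}:"

prompt = build_chat_prompt(
    "Aria", "Alex",
    [("Alex", "Hi there!"), ("Aria", "Hello Alex! How are you today?")],
)
```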
