---
license: apache-2.0
tags:
- Roleplay
- Solar
- Mistral
- Text Generation
---

![SnowLotus Logo](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/gTQtPK46laLIFg0RTAv73.png)
### Premise

So this is a basic SLERP merge between a smart model and a good prose model. Prose and smarts: what we all want in an uncensored RP model, right? In any case, I feel like Solar has untapped potential.

Sao10K's Frostwind finetune is a key component of the mixture; its smarts are impressive. NyxKrage's Frostmaid experiment, which merges Frostwind with a frankenmerge of Noromaid and a mystery medical model, delivers quite impressive prose. That model also incorporates long-range context and instructions creatively, despite being slightly incoherent due to the franken-merging.

So those are the main ingredients. Thanks to Nyx for sorting out the PyTorch files, btw.
### Recipe

So, the recipe. I added Nyx's Solar-Doc LoRA to Frostwind at a 0.15 weight, then gradient SLERP'd Frostwind (+ Solar-Doc) into Frostmaid with these params (a config sketch follows the list):

- filter: self_attn
  value: [0.9, 0.4, 0.1, 0, 0]
- filter: mlp
  value: [0.05, 0.95]
- value: 0.45
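For anyone who wants to reproduce the merge, here is a rough sketch of what the full mergekit SLERP config could look like. This is a sketch under assumptions, not the exact config used: the local path for Frostwind-with-Solar-Doc-folded-in, the 48-layer range, the base_model choice and the dtype are all my guesses.

```yaml
# Hypothetical mergekit config sketch; paths, layer_range, base_model and dtype are assumptions.
slices:
  - sources:
      # Frostwind with the Solar-Doc LoRA already merged in at ~0.15 weight
      # (placeholder local path, not a published repo)
      - model: ./frostwind-plus-solardoc
        layer_range: [0, 48]
      - model: NyxKrage/FrostMaid-10.7B-TESTING-pt
        layer_range: [0, 48]
merge_method: slerp
base_model: ./frostwind-plus-solardoc
parameters:
  t:
    - filter: self_attn
      value: [0.9, 0.4, 0.1, 0, 0]
    - filter: mlp
      value: [0.05, 0.95]
    - value: 0.45
dtype: float16
```

A config like this would be run with something like `mergekit-yaml config.yml ./snowlotus-output` (see the mergekit repo linked below). In mergekit's SLERP, `t` is the per-layer interpolation factor between the two models; the filters let the self_attn and mlp tensors follow their own gradients across the layer stack while everything else uses the flat 0.45.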
### Tentative Conclusion After a Dozen or So Tests

This model seems to have better prose and less GPT-ish language than the last version, with no degradation in coherency.

Cheers to all the finetuners, mergers and developers without whom open-source models wouldn't be half of what they are.
Resources used:

https://huggingface.co/NyxKrage/FrostMaid-10.7B-TESTING-pt

https://huggingface.co/Sao10K/Frostwind-10.7B-v1

https://huggingface.co/NyxKrage/Solar-Doc-10.7B-Lora

https://github.com/cg123/mergekit/tree/main