---
license: cc-by-nc-4.0
base_model:
- Alsebay/NarumashiRTS-V2
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Nitral-AI/KukulStanta-7B
tags:
- moe
- merge
- roleplay
---
## What is it?
A MoE model for roleplaying. Since 7B models are small, we can combine several of them into a bigger model (which CAN be smarter). It also handles (some limited) TSF (transsexual fiction) content, because my pre-trained model is included in the merge. Better than V2, BTW.
## GGUF Version?
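If you quantize it yourself, loading the result with llama-cpp-python looks roughly like the sketch below. The file name is a placeholder, not a released quant:

```python
# Minimal sketch: run a local GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="your-3x7B-quant.Q4_K_M.gguf",  # placeholder path, not a released file
    n_ctx=8192,        # 4-8k context fits on a 16GB card (see "Why 3x7B?" below)
    n_gpu_layers=-1,   # offload every layer to the GPU
)

out = llm("You are a roleplay character. Introduce yourself:", max_tokens=128)
print(out["choices"][0]["text"])
```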
## Recipe?
See the base model section above.
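For the curious, a mergekit-moe config along these lines would match that base model list. This is a hypothetical sketch, not the published recipe: the choice of KukulStanta-7B as the shared base, the hidden gate mode, and the positive prompts are all assumptions.

```yaml
# Hypothetical mergekit-moe sketch: three 7B experts over one shared base.
# Base model, gate_mode, and positive_prompts below are guesses, not the real recipe.
base_model: Nitral-AI/KukulStanta-7B
gate_mode: hidden          # route tokens by hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: Nitral-AI/KukulStanta-7B
    positive_prompts:
      - "roleplay"
  - source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    positive_prompts:
      - "chat"
  - source_model: Alsebay/NarumashiRTS-V2
    positive_prompts:
      - "TSF story"
```

With mergekit installed, something like `mergekit-moe config.yaml ./merged` would then build the merged model.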
## Why 3x7B?
In my tests, a 16GB VRAM card can fit the GGUF version of a <20B model at 4-8k context length. I don't want to make a model that I can't use.
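To make the fit argument concrete, here is the back-of-the-envelope arithmetic; the parameter count and bits-per-weight are approximations I'm assuming, not measured numbers:

```python
# Rough VRAM estimate for a quantized 3x7B MoE -- all numbers are approximations.
params = 18.5e9          # ~3x7B MoE: experts share attention, so well under 3 * 7B
bits_per_weight = 4.85   # roughly the average for a Q4_K_M quant
weights_gb = params * bits_per_weight / 8 / 1e9

kv_cache_gb = 1.0        # ballpark for 8k context at fp16 with 7B-class GQA dims

print(f"weights ~{weights_gb:.1f} GB + KV cache ~{kv_cache_gb:.1f} GB")
# -> weights ~11.2 GB + KV cache ~1.0 GB: comfortably inside a 16 GB card
```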