Qwen merges
This is a merge of pre-trained language models.
The goal of this merge was to create a roleplay model capable of decent Russian (RU) output.
This model is a merge of 14B Qwen 2.5 finetunes, so I recommend trying it if you are tired of Mistral Nemo.
Tested over 500 replies: the model is creative, good at both RP and ERP, and follows instructions well.
Because of that creativity it is occasionally a little unstable; nothing a single swipe won't fix.
RU performance is okay; ERP in Russian has not been tested yet. With cards that are not fully in Russian it can produce PROMT-like (machine-translated) output, but not always.
Tested with the ChatML prompt template, temperature 1.01, XTC threshold 0.1 / probability 0.1.
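For reference, the ChatML layout the model was tested with can be sketched as below. This is a minimal illustration of the format, not the model's official preprocessing; in practice the tokenizer's `apply_chat_template` (if the repo ships a chat template) produces the equivalent string.

```python
def chatml_prompt(messages):
    # Wrap each turn in <|im_start|>role ... <|im_end|> markers,
    # then open the assistant turn for the model to complete.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You are a roleplay character."},
    {"role": "user", "content": "Привет!"},
])
print(prompt)
```

Frontends such as SillyTavern apply this template automatically when ChatML is selected; the sampler settings above (temperature, XTC) are set separately.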
Base model
RefalMachine/RuadaptQwen2.5-14B-Instruct-1M