Irene is crazy (in a good way)
I loaded Irene like every other model, clicked regen, and the character threatened to maim or kill me. Which is definitely a first. It also describes the situation quite well:
> {{char}}'s voice thundered through the room, the walls seeming to tremble in response to his fury.
It's a little philosophical but I ~~like~~ love it. It makes characters speak more carelessly; it feels less refined, but in a way that makes it seem less like a giant algorithm.
Yay, I wanted SOVL's style. The weird things I did seem to have worked out.
Explanation of the merge.
Use model stock to combine similar models.
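A minimal sketch of what that step could look like as a mergekit config (the model names here are placeholders, not the models actually used in this merge):

```yaml
# Hypothetical mergekit model_stock config: pool several similar RP models.
# Model names are placeholders; swap in whatever you're combining.
merge_method: model_stock
base_model: some-org/rp-base-model
models:
  - model: some-org/rp-model-a
  - model: some-org/rp-model-b
  - model: some-org/rp-model-c
dtype: bfloat16
```

If you want to run a config like this, mergekit's CLI takes it with something like `mergekit-yaml config.yml ./merged-model`.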
Use Slerp with a gradient that tapers off at both ends.
The tapering is to keep the model mostly uncensored (when merging with a censored but smart model).
The middle layers don't seem to be as badly affected by the censoring, so I push them toward the smart model. (At least I think so; I'm just a little guy running a script.)
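As a rough sketch of that tapered gradient (assuming a 32-layer model; names and numbers are illustrative, not the actual recipe), with the uncensored merge as the base, t sits at 0 on both ends and only leans toward the smart model in the middle layers:

```yaml
# Hypothetical slerp config, not the actual recipe.
# t = 0 keeps the base (uncensored) model, t = 1 takes the censored "smart" model.
merge_method: slerp
base_model: ./uncensored-stock-merge
slices:
  - sources:
      - model: ./uncensored-stock-merge
        layer_range: [0, 32]
      - model: some-org/smart-but-censored-model
        layer_range: [0, 32]
parameters:
  t:
    # tapered at both ends, peaking toward the smart model mid-stack
    - value: [0.0, 0.2, 0.5, 0.2, 0.0]
dtype: bfloat16
```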
For merging two uncensored RP models with Slerp, just favor the model you want more.
But this time, make the gradient wild, fluctuating like [low, high, low, highest, low, high, low].
My thought process for the gradient is that introducing fluctuations reduces the chances of the resulting model being bland.
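A sketch of what that fluctuating gradient could look like (placeholder names again; the numbers just trace the [low, high, low, highest, low, high, low] shape while staying under 0.5 so the favored model still dominates):

```yaml
# Hypothetical slerp config for two uncensored RP models (placeholder names).
# Keeping every t value below 0.5 favors the base model overall;
# the fluctuation is what hopefully keeps the result from going bland.
merge_method: slerp
base_model: ./rp-model-you-favor
slices:
  - sources:
      - model: ./rp-model-you-favor
        layer_range: [0, 32]
      - model: ./the-other-rp-model
        layer_range: [0, 32]
parameters:
  t:
    - value: [0.1, 0.4, 0.1, 0.45, 0.1, 0.4, 0.1]
dtype: bfloat16
```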
I guess you could do this with dare_ties, but I think Slerp is the better merge method.
This could definitely be more thought out; I kinda just threw things together while low on sleep.
Also thanks to @grimjim for sharing their merge configs.
I've learnt the best merges come from low sleep, not overthought configs. Otherwise your model just ends up like every other model available.
Slightly crazy ≥ perfect merge
It feels like it stops the models from running into that low parameter repetition curse.
Can't repeat tokens if it doesn't think straight 😶‍🌫️
Also, you can find the censored layers for the models using an OAS script; you just need hours and to understand gibberish math stuff.
Doesn't an OAS script require a big GPU?
I'm not that patient.
This one is still cursed with repetition, but it is way lower.
It also mostly happens in narration, so it's not a big deal.
Maybe I need to cook up some crazy samplers.
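For what it's worth, these are the usual anti-repetition knobs in llama.cpp/KoboldCpp-style backends. The values below are just a generic starting point I'm assuming, not settings tested with this model:

```yaml
# Hypothetical sampler preset, not the author's settings.
temperature: 1.1        # slightly hotter to help break out of loops
min_p: 0.05             # prune the unlikely tail instead of leaning on top_p
repeat_penalty: 1.1     # keep it mild; cranking it too high wrecks the prose
repeat_last_n: 256      # how far back the repeat penalty looks
presence_penalty: 0.1   # small flat nudge against reusing tokens at all
```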