Good model but I have one issue...

#3
by LunarAura - opened

The model starts off strong, and I like that it's not overly positive or hopeful. I find it useful for dark roleplays, even if mine isn't too dark. Anyway, I do have one issue: it begins to speak more and more poetically the deeper I get into the roleplay, as if the characters start to lose their way of speaking.
[Attachment: Screenshot from 2024-06-25 21-46-18.png]

What sampler settings do you use, and how far did you extend the context, if at all? Send a screenshot or upload the JSON file of your sampler settings, please.

Also, if you're willing to start from an earlier point, you could try using the edit and continue features to lead the AI back into being consistent. I pretty much do that with every model I use.

> What sampler settings do you use, and how far did you extend the context, if at all? Send a screenshot or upload the JSON file of your sampler settings, please.

I use 16k context, but I notice it can start speaking poetically before I reach 16k. Are these the settings you're referring to?

[Attachment: Settings.png]

> Also, if you're willing to start from an earlier point, you could try using the edit and continue features to lead the AI back into being consistent. I pretty much do that with every model I use.

I'll give it a try.

Your repetition penalty is a bit too high; for Llama 3 it shouldn't be over 1.15 (personally I find somewhere between 1.05 and 1.1 to be good). Also, using Top P, Typical P, and Min P together seems a little too aggressive for token trimming. All you need is a Min P between 0.025 and 0.075, and if you still want to trim more tokens, all you really need besides Min P is Top K with a value between 40 and 80.
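For anyone curious how these three knobs actually interact, here is a minimal, backend-agnostic Python sketch (not any frontend's real implementation; the function names and toy logits are made up for illustration). It shows a repetition penalty scaling the logits of already-generated tokens, a Min P floor relative to the most likely token, and a Top K cap on the survivors:

```python
import math

def apply_repetition_penalty(logits, generated_ids, penalty=1.1):
    # CTRL-style penalty: divide positive logits (multiply negative ones)
    # for tokens that already appeared, making repeats less likely.
    out = list(logits)
    for tid in set(generated_ids):
        out[tid] = out[tid] / penalty if out[tid] > 0 else out[tid] * penalty
    return out

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def filter_min_p_top_k(logits, min_p=0.05, top_k=60):
    # Min P keeps tokens whose probability is at least min_p times the
    # top token's probability (0.025-0.075 is the range suggested above);
    # Top K then caps how many survivors remain (40-80 suggested above).
    probs = softmax(logits)
    cutoff = min_p * max(probs)
    survivors = [i for i, p in enumerate(probs) if p >= cutoff]
    survivors.sort(key=lambda i: probs[i], reverse=True)
    return survivors[:top_k]

# Toy example: four candidate tokens, two of them plausible.
logits = [5.0, 4.9, 0.0, -3.0]
print(filter_min_p_top_k(logits, min_p=0.05, top_k=60))  # the two close tokens survive
```

The key design point is that Min P's cutoff scales with the model's confidence: when one token dominates, the floor rises and weak candidates are dropped; when several tokens are close in probability, they all survive, which is why a single Min P usually replaces a whole stack of Top P / Typical P filters.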

Create a branch at a spot before the degradation and try RPing with those settings; if it still happens, let me know.

Edit: By the way, you might experience less degradation as you RP if you raise the token response limit; try a value of 200.

I used your suggestions, and it seems like my responses aren't degrading anymore. Thank you!
