Weird pronoun and anatomy issues

#2
by raveninrhythm - opened

I'll be the first to admit that I'm not that knowledgeable about these things, but I keep running into an odd issue where this line of models built on the Stheno Horror model struggles with anatomy and pronouns. I can sometimes fix it with more restrictive top-p sampling and temperature values, but it seems to struggle in that area overall no matter how specific you get in the prompt/memory/world info fields. I also get random hashtags, OOC remarks, and sometimes it'll number each sentence for whatever reason. Dunno how common these issues are, or how to fix them, but figured I'd give a heads up!
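For reference, here's roughly what I mean by "more restrictive" sampling, as a minimal sketch with llama-cpp-python (the model path and values below are just placeholders, not my exact settings):

```python
# Minimal sketch, not my exact settings: tightening temperature / top-p with
# llama-cpp-python. The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="path/to/model-Q3_K_M.gguf", n_ctx=4096)

out = llm(
    "She reached for his hand and",
    max_tokens=200,
    temperature=0.7,      # lower than typical "creative" values
    top_p=0.9,            # more restrictive nucleus sampling
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```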

Owner

A few other users reported the same. Did you try a different quant (or quants)?
Also, other versions (the Grand Horror series) may be more suitable.

I also just dropped Grand Story WTFrack-Instruct (18B) today.
It is stronger at instruction following and more balanced - but it does horror too.
I would say it's only a step or two below "Grand Horror" in terms of "horror" output, so to speak:

https://huggingface.co/DavidAU/L3-SMB-Grand-Story-WTFrack-Instruct-18.05B-GGUF

( I will be updating the model card shortly )

DavidAU changed discussion status to closed

Nope, I've only tried the 3km quant, so I'll go ahead and see if the 4km and 5km quants have the same issue!

Also, since I kinda use this model less for horror, I think the other model may actually work a little better for me (although I LOVE the horror potential). Thanks for the tips! I'll report back if I run into the same issues!

Wanted to report back!

Tried the 3km, 4km, and 5km quants for Hathor's Revenge and the 5km quant for Grand Story WTFrack-Instruct. The pronoun issues were pretty much consistent regardless of any instructions provided on that front. It seems hardwired to any mention of genitalia. It's made extra bizarre/hilarious by the model's bias towards butts and its anatomical misunderstanding of where the human butt is.

@raveninrhythm

This may help...

Update: I have done some research into this issue; here is how to address it:

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;

Set the "Smoothing_factor" to 1.5 to 2.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
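If it helps, here is a rough sketch of pulling those config/tokenizer files down with huggingface_hub; the repo id and the exact list of files are only placeholders - check the source repo for what it actually ships:

```python
# Rough sketch (placeholders only): grab the tokenizer/config files from a SOURCE
# repo and drop them in the same folder as the GGUF so text-generation-webui's
# "llama_HF" loader can use them. The exact file list varies per model.
from huggingface_hub import hf_hub_download

source_repo = "DavidAU/your-source-repo-here"            # placeholder repo id
model_dir = "text-generation-webui/models/your-model"    # folder that also holds the GGUF

for fname in ["config.json", "tokenizer_config.json",
              "tokenizer.json", "special_tokens_map.json"]:
    hf_hub_download(repo_id=source_repo, filename=fname, local_dir=model_dir)
```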

OTHER OPTIONS:

  • Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").

  • If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted - a request-level sketch follows below.
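If you are calling the backend directly rather than using the UI, the same settings can be passed in the generate request. A rough sketch against KoboldCpp's KoboldAI-style API - the "smoothing_factor" field name is my assumption for recent KoboldCpp builds, so verify it against your version:

```python
# Rough sketch: passing smoothing_factor / rep_pen through KoboldCpp's HTTP API
# instead of the UI sliders. Assumes KoboldCpp on localhost:5001; the
# "smoothing_factor" field name is assumed from recent builds - verify it.
import requests

payload = {
    "prompt": "The lights in the hallway flickered and",
    "max_length": 250,
    "smoothing_factor": 1.8,   # suggested range: 1.5 to 2.5
    "rep_pen": 1.1,            # optional - not needed if smoothing is enabled
}

resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
print(resp.json()["results"][0]["text"])
```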

Hey, thanks for getting back to me! I've never tried the smoothing factor before, but it seems to be helping for sure! Also, I just wanna say I'm loving your latest models: they've been pretty fun to play with!

@raveninrhythm

This document covers all the parameters/settings and samplers and talks in depth about using "Smoothing" to control some of my more difficult models.
All the models I have made have been "Classed" (on each model's repo card).

https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters

This document can be used to fine-tune control of ANY model, any quant (from any repo) too.
