Blank first response - just an FYI of what works for me

#1
by deleted - opened

Not a complaint at all; we all appreciate what is being done here, and most of us understand that you don't create the original models. This is just an observation. Recently I noticed that I sometimes get a blank first response from several Python code-gen targeted models, including this one. I saw this even before the GGUF format became a thing. But if I hit Continue, the model starts responding like it should. (ooba has this feature; not all web UIs do, unfortunately.)
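
For anyone curious what I mean by "continue" outside of a web UI, here is a minimal sketch using llama-cpp-python. The model path, prompt, and generation settings are placeholders I made up for illustration, and this is only an approximation of what ooba's Continue button does, not its actual implementation:

```python
# Minimal sketch, assuming llama-cpp-python and a local GGML/GGUF model file.
# "model.bin" and the generation settings are placeholders, not from this repo.
from llama_cpp import Llama

llm = Llama(model_path="model.bin")

prompt = "USER: Write a Python function that reverses a string. ASSISTANT:"
result = llm(prompt, max_tokens=256, temperature=0.7)
text = result["choices"][0]["text"]

# If the first reply comes back blank, "continue" by re-sending the prompt plus
# whatever (possibly empty) text was generated so far and generating again.
if not text.strip():
    result = llm(prompt + text, max_tokens=256, temperature=0.7)
    text = result["choices"][0]["text"]

print(text)
```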

Also, I had to switch to the Vicuna 1.1 prompt format for many models, even when the source lists other formats (especially when it lists Alpaca, which never seems to work right for me). Otherwise I get garbage responses, either not answering the question or just random nonsense. I also end up using the stock llama-precise parameter preset; again, anything else gives me garbage. So not a big deal, just an observation of what works for me in many cases, in case others see this too.
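
For reference, the Vicuna 1.1 style prompt I fall back to looks roughly like this (the exact wording of the system sentence varies between model cards, so treat it as an approximation):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <your question> ASSISTANT:
```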

And again, @TheBloke, thanks for all you do for the community.
