Why such a bad output?
#1 by anon7463435254 - opened
Maybe lower the temperature? It should reduce the randomness of the model's outputs. It would also be interesting to know how it compares to the original Llama-2 Chat model.
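For context, this is roughly what lowering the temperature does under the hood: logits are divided by the temperature before the softmax, so values below 1.0 sharpen the distribution toward the most likely token. A minimal standalone sketch (not tied to any particular inference stack):

```python
import numpy as np

def temperature_softmax(logits, temperature=1.0):
    """Convert raw logits to sampling probabilities at a given temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = [2.0, 1.0, 0.5]
hot = temperature_softmax(logits, temperature=1.5)   # flatter: more random sampling
cold = temperature_softmax(logits, temperature=0.3)  # sharper: closer to greedy decoding
print("T=1.5:", hot)
print("T=0.3:", cold)
```

At very low temperatures the top token dominates almost completely, which is why low-temperature output reads as more deterministic but also more repetitive.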
I usually don't test models for factual replies, as I'm more interested in using them for creative writing and roleplay chat, but from what I've experimented with, Llama-2 Chat is pretty good (except for the damn censorship and moralist agenda, of course). However, fine-tuning sometimes seems to degrade some of the original model's quality, even if it frees us from the shady agendas.