Text Generation
Transformers
PyTorch
English
llama
Inference Endpoints
text-generation-inference

SillyTavern Gibberish Output

by OrangeApples - opened

Has anyone gotten this to work with SillyTavern? All I'm getting is gibberish no matter what context template or instruction format I use. Meanwhile, the text outputs are fine when chatting directly through Oobabooga. Can't tell if the issue is with my ST settings, or if Ooba's API is somehow messing with the text generation.

Lowering the context length to 4k fixed it for some reason.
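In case it helps anyone hitting the same thing, here's a minimal sketch of what that fix looks like when calling Ooba's API directly instead of through SillyTavern. The endpoint path and the `truncation_length` parameter name are assumptions based on text-generation-webui's API and may differ in your version; check its docs. The key point is the same either way: cap the context window at 4096 tokens.

```python
import json

# Hypothetical request payload for text-generation-webui's completion API.
# The important part is truncation_length, which caps the prompt context
# at 4k tokens -- the setting that fixed the gibberish output above.
payload = {
    "prompt": "Hello there.",
    "max_new_tokens": 200,
    "truncation_length": 4096,  # context length capped at 4k (assumed param name)
}

# e.g. requests.post("http://127.0.0.1:5000/v1/completions", json=payload)
print(json.dumps(payload))
```

If the model was loaded with a context window smaller than what SillyTavern requests, overlong prompts can degrade into gibberish, so matching the two values is worth checking first.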

OrangeApples changed discussion status to closed
