Generated text frequently ends with 'User'

#40
by tolgaakar - opened

I have been playing around with both the unquantized and 8-bit versions. Mostly I use the prompt from the example, but I have tried other alternatives as well. For some reason, the generated response frequently ends with 'User' on a new line, as can also be seen in the example below. Does anyone else have the same problem?

```
QUESTION<<: How can I go from Berlin to Paris?

ANSWER<<: You can take a train from Berlin to Paris. There are several high speed train options, including the DB Eurostar and TGV train, that can get you to Paris in a matter of hours or a day depending on which option you choose.
User
```

I've frequently seen the same behavior from this model. I've been using it with LangChain, and my workaround has been to pass 'User' or '\nUser' as a stop token to my chain's predict method. This is sort of a hack: it removes any suffix in your provided stop list from the end of the response before storing it (or adding it to chain memory, if you're using one). See the sketch below.
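Here is a minimal sketch of that workaround, assuming an older LangChain API in which LLMChain forwards an extra `stop` key from `predict()` to the underlying LLM. The model id, prompt template, and generation settings are placeholders, not a recommendation:

```python
from langchain.chains import LLMChain
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate

# Assumption: a local transformers pipeline wrapped as a LangChain LLM.
llm = HuggingFacePipeline.from_model_id(
    model_id="tiiuae/falcon-7b-instruct",  # placeholder model id
    task="text-generation",
)

prompt = PromptTemplate(
    input_variables=["question"],
    template="QUESTION<<: {question}\n\nANSWER<<:",
)
chain = LLMChain(llm=llm, prompt=prompt)

# LLMChain picks up the extra `stop` key and passes it to the LLM,
# which truncates the output at the first matching stop sequence.
answer = chain.predict(
    question="How can I go from Berlin to Paris?",
    stop=["\nUser", "User"],
)
print(answer)
```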

I'm having the same issue, and it seems the problem is with the inference code. It detects the correct stop token and stops, but instead of omitting that token from the output it includes it. I tested a number of Falcon models and they all have the same problem when used with TGI (Text Generation Inference). Until that's fixed server-side, you can trim the stop sequence off the end yourself, as in the sketch below.
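A minimal client-side band-aid, assuming the `text_generation` Python client talking to a TGI server on localhost:8080 (the endpoint, prompt, and generation settings are placeholders):

```python
from text_generation import Client  # pip install text-generation

STOP_SEQUENCES = ["\nUser"]

def strip_stop_sequences(text: str, stop_sequences: list[str]) -> str:
    """Drop a stop sequence that the server left at the end of the output."""
    for stop in stop_sequences:
        if text.endswith(stop):
            return text[: -len(stop)].rstrip()
    return text

client = Client("http://127.0.0.1:8080")  # assumed local TGI endpoint
response = client.generate(
    "QUESTION<<: How can I go from Berlin to Paris?\n\nANSWER<<:",
    max_new_tokens=200,
    stop_sequences=STOP_SEQUENCES,
)
print(strip_stop_sequences(response.generated_text, STOP_SEQUENCES))
```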

For anyone else who ends up here, a band-aid fix for this is to write a custom output parser that strips the dangling 'User' turn: https://www.mlexpert.io/prompt-engineering/chatbot-with-local-llm-using-langchain#cleaning-output
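In that spirit, here is a rough sketch of what such a parser might look like (the class name and regex are illustrative, not the linked article's exact code):

```python
import re
from langchain.schema import BaseOutputParser

class CleanupOutputParser(BaseOutputParser):
    """Strip a dangling 'User' turn (and anything after it) from the output."""

    def parse(self, text: str) -> str:
        # Assumption: once the model starts a new "User" line, the answer is over.
        return re.sub(r"\nUser.*", "", text.strip(), flags=re.DOTALL).strip()

    @property
    def _type(self) -> str:
        return "cleanup_output_parser"
```

Depending on your LangChain version you may be able to attach this via the chain's `output_parser` argument, or you can simply call `parse()` on the raw response yourself.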
