<|eot_id|><|start_header_id|>assistant<|end_header_id|> in model outputs

#5
by willsims - opened

This model has a ton of potential, and thanks for making it available to everyone. I'm hosting your model via Inference Endpoints on Hugging Face, and we have an issue where `<|eot_id|><|start_header_id|>assistant<|end_header_id|>` is included in the model outputs. Is this expected, or do you know how we can fix it? Additionally, when we limit the token length, responses get cut off. Is the best way to target a certain message length through prompt engineering?

Hi @willsims !

> Is this expected or do you know how we can fix it?

Yes, it is expected, because that's the chat template both Meta and I used. If you are using Hugging Face, you can control it with chat templating (https://huggingface.co/docs/transformers/main/en/chat_templating), or you can simply filter out all the `<|...|>` special tokens from the output.
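A minimal post-processing sketch, assuming you receive the raw generated string from the endpoint (the function name and regex are just illustrative):

```python
import re

def strip_special_tokens(text: str) -> str:
    """Remove Llama-3-style special tokens such as <|eot_id|> from generated text."""
    return re.sub(r"<\|[^|]*\|>", "", text).strip()

raw = "Sure, here is the answer.<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
print(strip_special_tokens(raw))  # -> "Sure, here is the answer."
```

If you are decoding the model outputs yourself with transformers, `tokenizer.decode(output_ids, skip_special_tokens=True)` will usually drop these tokens for you.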

> Is the best way to target a certain message length through prompt engineering?

I think so. If you are using a low `max_tokens` for the response, you can try specifying in the system prompt that the model should answer with short and concise responses, or something along those lines; there's a rough sketch below. Hope it helps!
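A minimal sketch of that idea, assuming an OpenAI-style messages list as consumed by the chat template (the exact system-prompt wording is just an example):

```python
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant. Answer in at most two short sentences.",
    },
    {"role": "user", "content": "What does a chat template do?"},
]

# With transformers, the template is applied before generation, e.g.:
# prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
# output_ids = model.generate(**tokenizer(prompt, return_tensors="pt"), max_new_tokens=128)
```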

This is very helpful, thank you @vicgalle !
