Please add <|eot_id|> as a stop token to the HF config

#12
by omarkilani - opened

Both <|end_of_text|> and <|eot_id|> should be listed as stop tokens in the config, like they are in the reference implementation:

https://github.com/meta-llama/llama3/blob/359887376f0aaf30e433f23e25df858d8c2a9833/llama/generation.py#L173
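On the HF side, the change would look something like this (a sketch: the repo id below is an assumption for this card, and 128001 / 128009 are the IDs the Llama 3 tokenizer assigns to <|end_of_text|> and <|eot_id|>):

```python
from transformers import GenerationConfig

# Sketch of the requested change (repo id assumed for illustration).
# In the Llama 3 tokenizer: <|end_of_text|> = 128001, <|eot_id|> = 128009.
gen_config = GenerationConfig.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
gen_config.eos_token_id = [128001, 128009]  # stop on either terminator
gen_config.save_pretrained("./Meta-Llama-3-70B-Instruct")  # writes generation_config.json
```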

Thanks!

-- Groq

omarkilani changed discussion title from Please add <eot_id> as a stop token to the HF config to Please add <|eot_id|> as a stop token to the HF config

Yeah, generation does not stop and keeps going until the max token limit is reached.
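Until the config is updated, you can work around it by passing both terminators to `generate` yourself — a minimal sketch, assuming `model`, `tokenizer`, and `input_ids` are already set up for this checkpoint:

```python
# Stop on either <|end_of_text|> (the default EOS) or <|eot_id|> (end of turn).
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
)
```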

Meta Llama org

Fixed!

pcuenq changed discussion status to closed

I am still facing this issue. How can I make the response stop?

I fixed the issue by pointing to the tokenizer of another model card, Llama-3-70B-Instruct.

@solankibhargav @omarkilani
It seems my model is still generating indefinitely. Where should I include both stop tokens?

@Iionbarista I pointed my tokenizer to https://huggingface.co/v2ray/Llama-3-70B-Instruct and that solved my problem.
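In code, that workaround is just loading the tokenizer from the other repo while keeping the model weights as-is — a sketch (the base-model repo id is an assumption; the tokenizer repo is the one linked above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights from the original card, but take the tokenizer (with its
# stop-token configuration) from the repo linked in this thread.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("v2ray/Llama-3-70B-Instruct")
```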
