According to tokenizer_config.json the EOS token should be <|im_end|>, and my own testing confirms this: using the wrong token results in infinite generation.
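For anyone who wants to reproduce the check, here is a minimal sketch using transformers (the repo id below is just a placeholder, not this model's actual name):

```python
# Minimal sketch; "org/model" is a hypothetical repo id.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("org/model")

# tokenizer_config.json says the EOS token should be <|im_end|>
print(tok.eos_token)                            # expected: <|im_end|>
print(tok.eos_token_id)                         # id of <|im_end|>
print(tok.convert_tokens_to_ids("<|im_end|>"))  # should match the line above

# If generation runs with a different eos_token_id, the stop token is never
# recognized and the model keeps going until max_new_tokens is hit.
```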

Hi 👋 @CISCai, could you share how exactly you are using it (e.g. your inference code)? That would help us reproduce your issue accurately. Thank you very much! 🙏

GGUFs are broken by this; compare my GGUFs to all the others. I'm guessing transformers overrides the value with the one from special_tokens_map.json and therefore works correctly, whereas llama.cpp's conversion script takes it from config.json.
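A rough way to see the mismatch from the Python side (again, the repo id is a placeholder; this just illustrates where the two values come from):

```python
# Sketch only; "org/model" is a hypothetical repo id.
from transformers import AutoConfig, AutoTokenizer

repo = "org/model"
cfg = AutoConfig.from_pretrained(repo)
tok = AutoTokenizer.from_pretrained(repo)

print(cfg.eos_token_id)  # value stored in config.json (what the conversion script seems to pick up)
print(tok.eos_token_id)  # value the tokenizer actually uses, i.e. the id of <|im_end|>
```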

Thank you very much for your reply. We updated the tokenizer_config.json file about 6 hours ago; are you still having problems with the model?

Yes, that only addressed the tokenization of <|im_start|>, which was a different issue.

If you use Hugging Face's GGUF viewer you can see that all the other GGUFs have tokenizer.ggml.eos_token_id set to 2 instead of 7, the correct value.
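You can also check a local copy with llama.cpp's gguf Python package (pip install gguf); the filename below is a placeholder:

```python
# Sketch: read the EOS token id from a local GGUF file.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")
field = reader.fields["tokenizer.ggml.eos_token_id"]
# For a scalar field, data holds a single index into parts.
print(int(field.parts[field.data[0]][0]))  # broken GGUFs print 2, fixed ones 7
```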

BTW, in case some of the other GGUFs never get updated and someone stumbles upon this PR later: you can use my GGUF Editor to easily set the correct token and download an updated version of the GGUF. :)
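If you'd rather patch a local copy yourself, something along these lines should also work with the same gguf package (a sketch only, so back up the file first; the filename is a placeholder):

```python
# Sketch: overwrite the stored EOS token id in place.
from gguf import GGUFReader

reader = GGUFReader("model.gguf", "r+")  # open the memory-mapped file writable
field = reader.fields["tokenizer.ggml.eos_token_id"]
field.parts[field.data[0]][0] = 7        # write the correct id back into the file
```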

Hi @CISCai, thank you very much for your contribution! I've fixed the issue. Thank you again so much! 🙏

haijian06 changed pull request status to merged
