Missing BOS token in tokenized text
#1 by ZhaofengWu
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-SFT")
>>> tokenizer.apply_chat_template([{"role": "user", "content": "test"}])
[128006, 882, 128007, 198, 1985, 128009, 198]
>>> tokenizer.convert_ids_to_tokens(tokenizer.apply_chat_template([{"role": "user", "content": "test"}]))
['<|start_header_id|>', 'user', '<|end_header_id|>', 'Ċ', 'test', '<|eot_id|>', 'Ċ']
The BOS token is not added to the tokenized text. This contrasts with, for example, llama-3-instruct's tokenizer, which does add this token (see the chat template in https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/tokenizer_config.json). Is this intentional?
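For comparison, a quick check against the llama-3-instruct tokenizer (assuming access to the gated meta-llama repo) shows the BOS token being prepended:
>>> tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
>>> ids = tok.apply_chat_template([{"role": "user", "content": "test"}])
>>> tok.convert_ids_to_tokens(ids)[0]
'<|begin_of_text|>'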
The llama3-instruct tokenizer was updated twice to fix bugs after our project started.
We applied the first fix but not the BOS fix, so our model was trained without the BOS token. That is why we also delete the BOS token when we serve our reward model. However, we found that this does not matter much: it leads to less than a 1% difference in reward model accuracy.
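For reference, a minimal sketch of the kind of BOS stripping described above (illustrative only; the helper name is hypothetical, not from our actual serving code):
# Illustrative helper: drop a leading BOS id so inputs match the
# BOS-free format the model was trained on.
def strip_leading_bos(ids, tokenizer):
    if tokenizer.bos_token_id is not None and ids and ids[0] == tokenizer.bos_token_id:
        return ids[1:]
    return ids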
Got it, thank you!
ZhaofengWu changed discussion status to closed