
Using Mistral's chat_template produces different text than the demo.

#8
by EutronH - opened

For the following conversation, the input text produced by the template in the demo differs from the one produced by Mistral's chat_template.

Conversation: [{"role": "user", "content": "Hello"}, {"role": "assistant": "content": "How can I help you today?"}]

  1. the input text using the template in the demo:
    "[INST] Hello [/INST] How can I help you today?"

  2. the input text using Mistral's official tokenizer.chat_template:
    "<s>[INST] Hello [/INST]How can I help you today?</s>"

There are three differences in total:
i. 2. has a BOS token while 1. does not
ii. 2. has an EOS token while 1. does not
iii. 1. has a space between "[/INST]" and "How" while 2. does not

Which one is correct?
