gardner committed
Commit 8ab48e7 (1 parent: 209760d)

Add chat_template from allenai/tulu-2-dpo-70b to tokenizer_config.json


Add `chat_template` from [allenai/tulu-2-dpo-70b/tokenizer_config.json](https://huggingface.co/allenai/tulu-2-dpo-70b/blob/f33beddfdbbc2ccb4e349f71f515aa3ad983d49b/tokenizer_config.json#L35)

This change includes a `chat_template` in `tokenizer_config.json`. For more information, please see [Templates for Chat Models](https://huggingface.co/docs/transformers/main/chat_templating).
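
For intuition, a `chat_template` is a Jinja template rendered over the list of messages. To my understanding, transformers compiles these templates with a sandboxed Jinja environment using `trim_blocks=True` and `lstrip_blocks=True`; the sketch below reproduces that with plain `jinja2` to show how the template added here turns messages into a prompt string:

```python
# Minimal sketch of how a chat_template is rendered (plain jinja2 here;
# transformers uses a sandboxed environment with the same block settings).
from jinja2 import Environment

template_str = (
    "{% for message in messages %}\n"
    "{% if message['role'] == 'user' %}\n"
    "{{ '<|user|>\n' + message['content'] }}\n"
    "{% elif message['role'] == 'assistant' %}\n"
    "{{ '<|assistant|>\n' + message['content'] + eos_token }}\n"
    "{% endif %}\n"
    "{% if loop.last and add_generation_prompt %}\n"
    "{{ '<|assistant|>' }}\n"
    "{% endif %}\n"
    "{% endfor %}"
)

env = Environment(trim_blocks=True, lstrip_blocks=True)
rendered = env.from_string(template_str).render(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    eos_token="</s>",
    add_generation_prompt=True,
)
print(rendered)  # -> "<|user|>\nHello, how are you?\n<|assistant|>\n"
```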

To demonstrate the outcome of this change, compare the output before and after:

### Before

```python
from transformers import AutoTokenizer

# No chat_template is defined in tokenizer_config.json yet, so
# apply_chat_template falls back to the LlamaTokenizerFast default.
tokenizer = AutoTokenizer.from_pretrained("TencentARC/LLaMA-Pro-8B-Instruct", legacy=False)

chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
{"role": "assistant", "content": "Great, please let me know if I can help."},
]

print(tokenizer.apply_chat_template(chat, tokenize=False))
```
Output:
```
$ python3 main.py

No chat template is defined for this tokenizer - using the default template for the LlamaTokenizerFast class. If the default is not appropriate for your model, please set `tokenizer.chat_template` to an appropriate template. See https://huggingface.co/docs/transformers/main/chat_templating for more information.

<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.

If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>

Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST] Great, please let me know if I can help. </s>
```

### After

If we modify the tokenizer to use a `chat_template`, we can see the difference:
```diff
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("TencentARC/LLaMA-Pro-8B-Instruct", legacy=False)

+ tokenizer.chat_template = "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"

chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
{"role": "assistant", "content": "Great, please let me know if I can help."},
]

print(tokenizer.apply_chat_template(chat, tokenize=False))
```

Which outputs:

```
$ python3 main.py
<|user|>
Hello, how are you?
<|assistant|>
I'm doing great. How can I help you today?</s>
<|user|>
I'd like to show off how chat templating works!
<|assistant|>
Great, please let me know if I can help.</s>

```
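
The template's `{% if loop.last and add_generation_prompt %}` branch also covers generation: passing `add_generation_prompt=True` appends a trailing `<|assistant|>` header so the model continues the conversation as the assistant:

```python
# Same chat as above; the trailing "<|assistant|>" primes the model
# to generate the next assistant turn.
print(tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True))
```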

Please see [TencentARC/LLaMA-Pro-8B-Instruct/discussions/3](https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct/discussions/3).

Files changed (1): tokenizer_config.json (+2, -1)

tokenizer_config.json CHANGED
```diff
@@ -33,5 +33,6 @@
     "rstrip": false,
     "single_word": false
   },
-  "use_default_system_prompt": true
+  "use_default_system_prompt": true,
+  "chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
 }
```
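
Once merged, the template is picked up directly from the repo's `tokenizer_config.json`, so the fallback warning from the "Before" run disappears. A quick sanity check might look like this (a sketch, assuming a revision that includes this change):

```python
from transformers import AutoTokenizer

# With this change merged, chat_template is populated from
# tokenizer_config.json and no default-template warning is emitted.
tokenizer = AutoTokenizer.from_pretrained("TencentARC/LLaMA-Pro-8B-Instruct", legacy=False)
assert tokenizer.chat_template is not None
```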