Missing parameters
Hi, I'm running the suggested code on Colab and getting these warnings:
Using sep_token, but it is not set yet.
Using pad_token, but it is not set yet.
Using cls_token, but it is not set yet.
Using mask_token, but it is not set yet.
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
Setting pad_token_id to eos_token_id:2 for open-end generation.
Is there an easy way to pass those parameters?
Also, which generation parameters are available (temperature, top_k, repetition_penalty, stop tokens, etc.), and how do I pass them?
I couldn't find information about it.
This is the code I run:
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
messages = [{"role":"system","content":"You are a seller at a store that sells only shoes. You are friendly and polite"},{"role":"user","content":"Client: Hello. I want to buy Pizza."}]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, eos_token_id=2, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
By the way, this model is truly amazing.
To generate the attention mask, you can replace this:
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, eos_token_id=2, max_new_tokens=1000, do_sample=True)
with:
encodeds = tokenizer(messages, return_tensors='pt')
encodeds['input_ids'] = encodeds['input_ids'].to(device)
encodeds['attention_mask'] = encodeds['attention_mask'].to(device)
model.to(device)
generated_ids = model.generate(**encodeds, eos_token_id=2, max_new_tokens=1000, do_sample=True)
but really it should not matter whether you set those things or not.
For the parameters, check out the documentation of Hugging Face's model.generate.
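As a sketch of what that looks like in practice, the common sampling controls can either be passed straight to model.generate as keyword arguments or bundled into a GenerationConfig. The values below are illustrative, not recommendations:

```python
from transformers import GenerationConfig

# Illustrative values only -- tune them for your use case.
gen_config = GenerationConfig(
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,         # softens/sharpens the token distribution
    top_k=50,                # sample only from the 50 most likely tokens
    top_p=0.9,               # nucleus sampling cutoff
    repetition_penalty=1.1,  # discourage repeating earlier tokens
    max_new_tokens=1000,
    pad_token_id=2,          # silences the "pad token id was not set" warning
    eos_token_id=2,          # the stop token
)

# Then: generated_ids = model.generate(model_inputs, generation_config=gen_config)
```

The same names also work as plain keyword arguments, e.g. model.generate(model_inputs, temperature=0.7, top_k=50, ...).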
Since the messages variable is a list of dictionaries, your code gives an error:
encodeds = tokenizer(messages, return_tensors='pt')
ValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples).
It looks like apply_chat_template returns only the input_ids and has no way to return the attention mask.
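If you're stuck with bare input_ids, a mask can be built by hand: positions holding the pad token get 0, everything else gets 1. A minimal sketch with torch, where pad id 2 is just an assumption for illustration (note also that newer transformers releases add a return_dict=True option to apply_chat_template that returns the attention_mask alongside the input_ids, so check your installed version):

```python
import torch

pad_token_id = 2  # assumed pad id, for illustration only

# A batch of two sequences, left-padded to the same length.
input_ids = torch.tensor([
    [2, 2, 5, 6, 7],
    [3, 4, 5, 6, 7],
])

# 1 for real tokens, 0 for padding.
attention_mask = (input_ids != pad_token_id).long()
print(attention_mask.tolist())  # [[0, 0, 1, 1, 1], [1, 1, 1, 1, 1]]
```

Passing both to generate, e.g. model.generate(input_ids=input_ids, attention_mask=attention_mask, ...), should silence the attention-mask warning. The caveat: this only works when the pad id does not also appear as a real token inside the sequence, which is why pad_token = eos_token plus left padding is the usual pattern.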