Error with Tokenizer

#121
by wissamee - opened

Hello,
I'm currently fine-tuning the "Mistral-7B-Instruct-v0.1" model and I've run into an issue with AutoTokenizer from Transformers that I haven't faced before. Here's the code I'm using:

```python
from transformers import AutoTokenizer

base_model_id = "mistralai/Mistral-7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(
    base_model_id,
    padding_side="left",  # reduces memory usage
    add_eos_token=True,
    add_bos_token=True,
)
tokenizer.pad_token = tokenizer.eos_token
```

However, I'm receiving the following error:

```
OSError: Can't load tokenizer for 'mistralai/Mistral-7B-Instruct-v0.1'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'mistralai/Mistral-7B-Instruct-v0.1' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.
```
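For what it's worth, the first cause the error message itself names (a local directory shadowing the repo id) can be ruled out with a quick check from the working directory. This is just a diagnostic sketch, not a fix; the directory name is taken from the error message:

```python
import os

base_model_id = "mistralai/Mistral-7B-Instruct-v0.1"

# If a local directory with this exact relative path exists, from_pretrained
# will try to load tokenizer files from it instead of the Hub repo.
shadowed = os.path.isdir(base_model_id)
print("shadowing directory exists:", shadowed)
```

If this prints `True`, renaming or moving that directory should make the loader fall back to the Hub; if it prints `False`, the problem is more likely on the download side (network access, or missing authentication if the repo is gated).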

Does anyone know how to resolve this issue?
