Error

#5
by OFT - opened

Can't load tokenizer using from_pretrained, please update its configuration: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'>

Owner

Hi @OFT, I just tried loading the tokenizer using the code in the model card. It worked without any issues; could you please try again and let me know? :)
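For reference, a minimal sketch of what loading this checkpoint typically looks like for a `VisionEncoderDecoderModel` repo. The exact snippet is in the model card; the class choices and the `load_captioner` helper here are assumptions, not the card's verbatim code, and running it requires `transformers` plus network access to the Hub:

```python
repo_id = "bipin/image-caption-generator"

def load_captioner(repo_id: str):
    """Load the tokenizer and vision-encoder-decoder model from the Hub.

    Requires the `transformers` library and network access; the import is
    kept inside the function so the module can be inspected without it.
    """
    from transformers import AutoTokenizer, VisionEncoderDecoderModel

    # If AutoTokenizer raises the "Can't load tokenizer ...
    # VisionEncoderDecoderConfig" error, the repo's config may not name a
    # tokenizer class; loading the decoder's tokenizer class directly is a
    # common workaround (which class applies is repo-specific).
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = VisionEncoderDecoderModel.from_pretrained(repo_id)
    return tokenizer, model

# Usage (downloads weights on first call):
# tokenizer, model = load_captioner(repo_id)
```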

Hi @bipin, sorry for my late response. I only noticed the popup message now.

I tried it again and received the following error:

{"error":"Can't load tokenizer using from_pretrained, please update its configuration: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'>"}

Owner


Which version of the transformers library are you using? Are you trying the code from the model card, and are you running it on Google Colab or locally?


I used the "Inference API" widget on the model card page of https://huggingface.co/bipin/image-caption-generator.

Owner

Hi @OFT, the same issue is discussed in #1.
Closing this issue for now, but feel free to re-open if the solution mentioned there doesn't work.

bipin changed discussion status to closed
