ValueError: Couldn't instantiate the backend tokenizer from one of
I try to use this model but get an error:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a tokenizers library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
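The last line of the error is the key hint: the fast tokenizer for this DeBERTa-v3 checkpoint is converted from a sentencepiece-based slow tokenizer, so the sentencepiece package must be importable. A quick way to check (standard library only, nothing specific to this model):
import importlib.util
print(importlib.util.find_spec("sentencepiece") is not None)  # False means sentencepiece is not installed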
I have the same issue
I also tried use_fast=False:
tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa-v1.1", use_fast=False)
but it fails with:
TypeError: 'NoneType' object is not callable
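As far as I can tell, use_fast=False does not avoid the problem: the slow path for this checkpoint is the sentencepiece-based DebertaV2Tokenizer, and when sentencepiece is missing that class resolves to None, which is why the call fails with 'NoneType' object is not callable. A minimal sketch of the slow route, assuming sentencepiece is installed:
# Slow (sentencepiece-based) tokenizer used by DeBERTa-v3 checkpoints; still requires sentencepiece
from transformers import DebertaV2Tokenizer
tokenizer = DebertaV2Tokenizer.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")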
What worked for me is:
First, install sentencepiece and transformers (if you are in a notebook and transformers was already imported, restart the runtime after installing so the new package gets picked up):
!pip install sentencepiece
!pip install transformers
Then load the model and tokenizer:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
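After that the tokenizer loads as a fast tokenizer. For a quick sanity check, here is a minimal inference sketch; I'm assuming the usual sentence-pair setup for this ABSA model (review text plus aspect term) and reading the label names from model.config.id2label rather than hardcoding them:
import torch

text = "The food was great but the service was slow."
aspect = "service"

# Encode the (sentence, aspect) pair and run a forward pass without gradients
inputs = tokenizer(text, aspect, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])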