transformers load fails?

#6
by bdambrosio - opened

I'm on the latest transformers:
python3 -m pip install --upgrade transformers

Traceback (most recent call last):
  File "/home/bruce/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 951, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/home/bruce/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 653, in __getitem__
    raise KeyError(key)
KeyError: 'gemma2'

Am I doing something stupid?
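
For what it's worth, the KeyError just means the installed transformers predates gemma2 support, so the model type isn't in its config registry (the same CONFIG_MAPPING that appears in the traceback). A quick diagnostic sketch, assuming the membership check works on that mapping; printing the version is the reliable part:

import transformers
from transformers.models.auto.configuration_auto import CONFIG_MAPPING

print(transformers.__version__)    # gemma2 support landed in 4.42
print("gemma2" in CONFIG_MAPPING)  # False means this install is too old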

I have the same problem

just found a transformers update wheel!
Look in the transformers folder in the 'Files and versions' tab.

bdambrosio changed discussion status to closed

I've faced the same issue.

pip uninstall transformers
pip install git+https://github.com/huggingface/transformers

You need the latest version from source!

"transformers_version": "4.42.0.dev0"

Thanks. The update wheel in the transformers subdir under 'Files and versions' seems to fix it too, but I'm now getting non-compliant prompt responses (i.e., not following instructions that every other model follows). I'm using the chat template from tokenizer_config; not sure what's wrong, will debug later. Oh well.
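
For reference, the standard way to apply the bundled chat template looks like this (a minimal sketch; the exact model id is an assumption):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")  # model id is an assumption
messages = [{"role": "user", "content": "Write a haiku about the sea."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # Gemma-style templates wrap turns in <start_of_turn>user ... <end_of_turn>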

bdambrosio changed discussion status to open

I'm also experiencing an issue where the model simply outputs its response as-is when I use it.
Additionally, I encounter an error during generation when I set use_cache=True in model.generate().
The error occurs in the torch.arange() function, but I'm not sure why this is happening.

TypeError: arange() received an invalid combination of arguments - got (NoneType, int, device=torch.device), but expected one of:
 * (Number end, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
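
For context, the failing call is just the standard generation path (a repro sketch; the model id and prompt are assumptions):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumption: whichever Gemma 2 checkpoint this discussion is on
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32, use_cache=True)  # raises the arange() TypeError on older transformers
print(tokenizer.decode(out[0], skip_special_tokens=True))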

Google org

You should use the latest version of transformers: pip install -U transformers

Google org

Are the right chat formatting templates being used here?
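
A quick way to inspect what template actually ships with the tokenizer (a sketch, assuming a tokenizer is already loaded):

print(tokenizer.chat_template)
# If this is None, older transformers versions fall back to a generic default template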

bdambrosio changed discussion status to closed
