ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models

#12 opened by shinerlinda

Hello, I have been running this in Google Colab for a while and it worked fine, but recently I started hitting the error below. Do you know why this might be happening?

```
Traceback (most recent call last):
  File "/content/drive/MyDrive/joy-caption-pre-alpha/app.py", line 179, in <module>
    main()
  File "/content/drive/MyDrive/joy-caption-pre-alpha/app.py", line 149, in main
    models = load_models()
  File "/content/drive/MyDrive/joy-caption-pre-alpha/app.py", line 45, in load_models
    text_model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto", torch_dtype=torch.bfloat16).eval()
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3990, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/big_modeling.py", line 498, in dispatch_model
    model.to(device)
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2839, in to
    raise ValueError(
ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct dtype.
```
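For context on what the error message is asking for: once a checkpoint is loaded with a bitsandbytes quantization config, the quantized weights are already placed on their devices, so any later `model.to(device)` call raises this `ValueError`. In the traceback above, the `.to()` comes from inside `accelerate`'s `dispatch_model` rather than user code, which suggests the installed `transformers` and `accelerate` versions disagree about handling quantized models; aligning or updating both packages is a plausible first step. Below is a minimal sketch of a load that avoids the check entirely. The `quant_config` and the `MODEL_PATH` placeholder are illustrative, not the actual configuration from `app.py`:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

MODEL_PATH = "path/to/model"  # placeholder; app.py defines its own MODEL_PATH

# Illustrative 4-bit config; the real checkpoint may ship its own
# quantization_config, in which case this argument can be omitted.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# device_map="auto" lets accelerate place the quantized weights directly.
# Note: no .to(...) or .cuda() afterwards -- the model is already on the
# correct devices and cast to the correct dtype, per the error message.
text_model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    device_map="auto",
    quantization_config=quant_config,
).eval()
```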

