Duplicating the space - runtime error

#2
by ergo202x - opened

Hello - I attempted to duplicate the space and received the following runtime error while the space was starting. Is this expected? I wasn't sure if this was the correct place to report the issue; I can send it somewhere else if it's helpful. Thank you!

runtime error

Downloading shards: 0%| | 0/7 [00:00<?, ?it/s]

Downloading shards: 14%|█▍ | 1/7 [00:53<05:20, 53.34s/it]

Downloading shards: 29%|██▊ | 2/7 [01:54<04:50, 58.18s/it]

Downloading shards: 43%|████▎ | 3/7 [02:56<03:58, 59.59s/it]

Downloading shards: 57%|█████▋ | 4/7 [03:56<02:59, 59.70s/it]

Downloading shards: 71%|███████▏ | 5/7 [04:57<02:00, 60.38s/it]

Downloading shards: 86%|████████▌ | 6/7 [05:59<01:00, 60.89s/it]

Downloading shards: 100%|██████████| 7/7 [06:44<00:00, 55.71s/it]
Downloading shards: 100%|██████████| 7/7 [06:44<00:00, 57.79s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 30, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
    return model_class.from_pretrained(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3307, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3428, in _load_pretrained_model
    raise ValueError(
ValueError: The current device_map had weights offloaded to the disk. Please provide an offload_folder for them. Alternatively, make sure you have safetensors installed if the model you are using offers the weights in this format.

Container logs:

===== Application Startup at 2023-11-21 19:01:44 =====

Downloading shards: 0%| | 0/7 [00:00<?, ?it/s]

You should do

model = AutoModelForCausalLM.from_pretrained(model_id)
model = model.to(torch.bfloat16)
model = model.to(device)

instead of that line. Passing device_map="auto" is what triggers the disk offload (and the request for an offload_folder) when there isn't enough GPU/CPU memory; loading normally and moving the model explicitly avoids that path.
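To show the shape of that change in a runnable form, here's a minimal sketch: a tiny nn.Linear stands in for the real AutoModelForCausalLM (downloading the actual 7-shard checkpoint isn't practical here), and `device` is assumed to be set elsewhere in app.py. The point is the order of operations: load first, then cast the dtype, then move to one device.

```python
import torch
import torch.nn as nn

# Stand-in for: model = AutoModelForCausalLM.from_pretrained(model_id)
# (no device_map="auto", so no disk-offload path is taken)
model = nn.Linear(4, 4)

# Cast all weights to bfloat16, then move the whole model to one device
model = model.to(torch.bfloat16)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

print(model.weight.dtype)  # torch.bfloat16
```

The trade-off is that this only works when the full model fits on the single target device; if it doesn't, the other route is to keep device_map="auto" and pass an offload_folder, as the error message suggests.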
