runtime error

Exit code: 1. Reason:

Building wheel for flash-attn (setup.py): started
Building wheel for flash-attn (setup.py): finished with status 'done'
Created wheel for flash-attn: filename=flash_attn-2.7.4.post1-py3-none-any.whl size=187788763 sha256=a8719a86d0cd12a105739151f9e45ac8c2ecb840a83ed7a080a88d1645a98440
Stored in directory: /home/user/.cache/pip/wheels/59/ce/d5/08ea07bfc16ba218dc65a3a7ef9b6a270530bcbd2cea2ee1ca
Successfully built flash-attn
Installing collected packages: einops, flash-attn
Successfully installed einops-0.8.1 flash-attn-2.7.4.post1

[notice] A new release of pip is available: 24.2 -> 25.0.1
[notice] To update, run: /usr/local/bin/python3.10 -m pip install --upgrade pip

Loading CLIP
Loading VLM's custom vision model
Loading tokenizer
Loading LLM: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
Downloading shards: 100%|██████████| 4/4 [00:43<00:00, 10.83s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 5.22it/s]
Loading VLM's custom text model
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
Loading image adapter
pixtral_model: <class 'NoneType'>
pixtral_processor: <class 'NoneType'>
Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    from joycaption import stream_chat_mod, get_text_model, change_text_model, get_repo_gguf
  File "/home/user/app/joycaption.py", line 237, in <module>
    @spaces.GPU()
TypeError: spaces.GPU() missing 1 required positional argument: 'func'
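
The dependency build and the model download both complete; the crash is the TypeError at the end of the traceback. Importing joycaption.py fails at the @spaces.GPU() decorator because the installed version of the Hugging Face spaces package defines GPU so that it expects the decorated function itself as a positional argument. A minimal sketch of the usual workaround, assuming the decorator at joycaption.py line 237 wraps one of the imported functions (the function name and body below are placeholders, not the Space's actual code):

```python
import spaces  # Hugging Face `spaces` package, available inside a Space

# Failing form from the traceback: spaces.GPU() is called with no arguments,
# but this version's GPU expects the function itself, hence
# "missing 1 required positional argument: 'func'".
#
# @spaces.GPU()
# def stream_chat_mod(...): ...

# Bare-decorator form: the function is passed positionally, which both older
# and newer releases of the package accept.
@spaces.GPU
def stream_chat_mod(prompt: str) -> str:
    # Placeholder body; the real implementation lives in joycaption.py.
    return prompt
```

Alternatively, pinning a newer `spaces` release in requirements.txt (one where `spaces.GPU()` with empty parentheses returns a decorator) should let the existing `@spaces.GPU()` line work unchanged.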
