runtime error

.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
  deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
Loading pipeline components...:   0%|          | 0/9 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ     | 1/2 [00:02<00:02,  2.52s/it]
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:04<00:00,  2.36s/it]
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:04<00:00,  2.38s/it]
Loading pipeline components...:  11%|β–ˆ         | 1/9 [00:04<00:39,  4.97s/it]
Loading pipeline components...:  44%|β–ˆβ–ˆβ–ˆβ–ˆβ–     | 4/9 [00:07<00:08,  1.67s/it]
Loading pipeline components...:  67%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹   | 6/9 [00:08<00:03,  1.17s/it]
Loading pipeline components...:  89%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 8/9 [00:10<00:01,  1.13s/it]
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 9/9 [00:11<00:00,  1.23s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 22, in <module>
    def generate(prompt, guidance_scale, guidance_rescale, num_inference_steps, resolution, negative_prompt):
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/decorator.py", line 79, in GPU
    return _GPU(task, duration)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/decorator.py", line 113, in _GPU
    client.startup_report()
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/client.py", line 45, in startup_report
    raise RuntimeError("Error while initializing ZeroGPU: Unknown")
RuntimeError: Error while initializing ZeroGPU: Unknown
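The FutureWarning near the top is unrelated to the crash, and it names its own fix; a minimal sketch of the import change it asks for, assuming the deprecated import sits in the app's own code rather than inside a pinned dependency:

    # Old import path, deprecated and scheduled for removal in diffusers 1.0.0:
    # from diffusers.models.transformer_2d import Transformer2DModelOutput

    # Replacement path given in the warning message itself:
    from diffusers.models.modeling_outputs import Transformer2DModelOutput

The actual failure is in the traceback: the decorator from spaces/zero/decorator.py (presumably `@spaces.GPU` on `generate`) runs at module import in app.py line 22, and the spaces client's startup_report() raises RuntimeError: Error while initializing ZeroGPU: Unknown, so the Space crashes before it ever serves a request.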
