Apply for community grant: Academic project (gpu)
We are releasing the demo for the paper "FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction".
Project page: https://bluestyle97.github.io/projects/freesplatter/
We would greatly appreciate any assistance from the Hugging Face community in using ZeroGPU. Thank you.
Hi @bluestyle97, we've assigned ZeroGPU to this Space. Please check the compatibility and usage sections of this page so that your Space can run on ZeroGPU.
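For reference, the basic pattern looks roughly like this (a minimal sketch with a placeholder model and handler, not your Space's actual code):

```python
# Minimal ZeroGPU sketch; the model and handler below are placeholders.
import spaces            # import `spaces` before anything that might touch CUDA
import torch
import gradio as gr

model = torch.nn.Linear(4, 4)        # build/load the model on CPU in the main process

@spaces.GPU                          # a GPU is attached only while this function runs
def predict(x: float) -> str:
    m = model.to("cuda")             # move to the GPU inside the handler
    t = torch.full((1, 4), float(x), device="cuda")
    return str(m(t).tolist())

demo = gr.Interface(fn=predict, inputs=gr.Number(value=1.0), outputs=gr.Textbox())
demo.launch()
```

The key point is that CUDA work only happens inside the function decorated with @spaces.GPU, where a GPU is actually attached.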
@hysts Hi, I encountered the following error when running this space on ZeroGPU. Could you please help me figure out the problem?
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 135, in worker_init
    torch.init(nvidia_uuid)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 350, in init
    torch.Tensor([0]).cuda()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 314, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 304: OS call failed or operation not supported on this OS
INFO:httpx:HTTP Request: POST http://device-api.zero/release?allowToken=b38808f70e8417c8405d46c66833078e272f7e60add2238b3225e494238d93de&fail=true "HTTP/1.1 200 OK"
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1935, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
    prediction = await anyio.to_thread.run_sync( # type: ignore
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2505, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 1005, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 214, in gradio_handler
    raise res.value
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 304: OS call failed or operation not supported on this OS
I saw the same error in the TRELLIS Space earlier. Apparently, the TRELLIS Space author fixed it by removing kaolin from their dependencies: https://huggingface.co/spaces/JeffreyXiang/TRELLIS/discussions/1. I'm not sure, but it might be related.
We don't use kaolin in this project, so I think the error is caused by something else.
Well, yeah, that's true, but what I meant is that some other dependency of your Space might be calling CUDA-related code at import time.
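One rough way to narrow it down (a local debugging sketch, not tied to your actual code; the commented-out import is just a placeholder) is to install the Space's requirements locally and check which import initializes CUDA:

```python
# Local debugging sketch: find out which import initializes CUDA.
# Run this outside the Space with the same requirements installed.
import torch

def report(stage: str) -> None:
    # True once something has created a CUDA context in this process
    print(f"{stage}: cuda initialized = {torch.cuda.is_initialized()}")

report("after importing torch")          # expected: False

# Placeholder: add your Space's real imports one at a time and re-check.
# import some_heavy_dependency
# report("after importing some_heavy_dependency")
```

On ZeroGPU, any dependency that creates a CUDA context in the main process like that can trigger this error; the CUDA calls have to move inside the @spaces.GPU-decorated handler.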