kadirnar / Runtime error

Exit code: 1. Reason: The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.

0it [00:00, ?it/s]
0it [00:00, ?it/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 16, in <module>
    mask_predictor = AutoMasker(
  File "/home/user/app/utils/garment_agnostic_mask_predictor.py", line 211, in __init__
    self.densepose_processor = DensePose(densepose_path, device)
  File "/home/user/app/utils/densepose_for_mask.py", line 40, in __init__
    self.predictor = DefaultPredictor(self.cfg)
  File "/home/user/app/detectron2/engine/defaults.py", line 282, in __init__
    self.model = build_model(self.cfg)
  File "/home/user/app/detectron2/modeling/meta_arch/build.py", line 23, in build_model
    model.to(torch.device(cfg.MODEL.DEVICE))
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1174, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 805, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 314, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
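The traceback shows detectron2 moving the DensePose model to the device named in `cfg.MODEL.DEVICE` (which defaults to `"cuda"`) while the Space's container has no GPU, so `torch._C._cuda_init()` fails. A common workaround is to pick the device at startup and write it into the config before constructing `DefaultPredictor`. The sketch below is a minimal illustration of that fallback logic; `pick_device` is a hypothetical helper, and the commented lines assume the app's own `cfg`/`DefaultPredictor` setup from `densepose_for_mask.py`:

```python
def pick_device(cuda_available: bool) -> str:
    """Return the torch device string to use, falling back to CPU
    when no CUDA GPU is present."""
    return "cuda" if cuda_available else "cpu"

# In the app you would call this with the real availability check,
# before the predictor is built (hypothetical placement, based on the
# traceback's densepose_for_mask.py):
#
#   import torch
#   cfg.MODEL.DEVICE = pick_device(torch.cuda.is_available())
#   predictor = DefaultPredictor(cfg)
#
# With cfg.MODEL.DEVICE set to "cpu", detectron2's build_model() never
# triggers torch.cuda initialization on a CPU-only Space.
```

Alternatively, assigning GPU hardware to the Space makes `torch.cuda.is_available()` true and the original config works unchanged.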
