Abstract
Missing values remain a common challenge for depth data across its wide range of applications, stemming from causes such as incomplete data acquisition and perspective alteration. This work addresses this challenge with DepthLab, a foundation depth inpainting model powered by image diffusion priors. Our model features two notable strengths: (1) it demonstrates resilience to depth-deficient regions, providing reliable completion for both continuous areas and isolated points, and (2) it faithfully preserves scale consistency with the known depth it is conditioned on when filling in missing values. Drawing on these advantages, our approach proves its worth in various downstream tasks, including 3D scene inpainting, text-to-3D scene generation, sparse-view reconstruction with DUST3R, and LiDAR depth completion, exceeding current solutions in both numerical performance and visual quality. Our project page with source code is available at https://johanan528.github.io/depthlab_web/.
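To make the inpainting setup concrete, below is a minimal, hypothetical sketch of diffusion-style depth completion with known-region re-injection (a generic RePaint-style trick, not the paper's actual conditioning scheme: DepthLab trains a diffusion model conditioned on the image and the known depth). The function name `inpaint_depth` and the Gaussian-blur stand-in for the learned denoiser are illustrative assumptions so the loop runs end to end.

```python
# Hypothetical sketch: iterative denoising that keeps observed depth fixed.
# A Gaussian blur stands in for the learned denoiser; all names are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def inpaint_depth(depth, known_mask, steps=50, seed=0):
    """Fill missing depth (known_mask == False) while preserving known values.

    depth: (H, W) float array; values where known_mask is False are ignored.
    known_mask: (H, W) bool array, True where depth is observed.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=depth.shape)  # start from noise in the unknown region
    for t in range(steps, 0, -1):
        sigma = t / steps
        x = gaussian_filter(x, sigma=1.0)               # stand-in denoising step
        x = x + sigma * 0.1 * rng.normal(size=x.shape)  # shrinking stochastic step
        x[known_mask] = depth[known_mask]  # re-inject observed depth every step,
                                           # so completion stays scale-consistent
    return x

# Toy usage: a depth ramp with a missing square region.
H, W = 64, 64
gt = np.tile(np.linspace(0.5, 2.0, W), (H, 1))
mask = np.ones((H, W), bool)
mask[20:40, 20:40] = False
filled = inpaint_depth(np.where(mask, gt, 0.0), mask)
```

The re-injection line is what enforces agreement with the conditioned known depth; the real model achieves this through learned conditioning rather than hard replacement.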
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- HI-SLAM2: Geometry-Aware Gaussian SLAM for Fast Monocular Scene Reconstruction (2024)
- How to Use Diffusion Priors under Sparse Views? (2024)
- TDCNet: Transparent Objects Depth Completion with CNN-Transformer Dual-Branch Parallel Network (2024)
- TSGaussian: Semantic and Depth-Guided Target-Specific Gaussian Splatting from Sparse Views (2024)
- Prompting Depth Anything for 4K Resolution Accurate Metric Depth Estimation (2024)
- GANESH: Generalizable NeRF for Lensless Imaging (2024)
- RoomPainter: View-Integrated Diffusion for Consistent Indoor Scene Texturing (2024)