Spaces:
Runtime error
First model version
- checkpoints +0 -1
- depth_pro.pt → checkpoints/depth_pro.pt +0 -0
- src/depth_pro.egg-info/PKG-INFO +0 -111
- src/depth_pro.egg-info/SOURCES.txt +0 -28
- src/depth_pro.egg-info/dependency_links.txt +0 -1
- src/depth_pro.egg-info/entry_points.txt +0 -2
- src/depth_pro.egg-info/requires.txt +0 -6
- src/depth_pro.egg-info/top_level.txt +0 -1
checkpoints
DELETED
@@ -1 +0,0 @@
a
depth_pro.pt → checkpoints/depth_pro.pt
RENAMED
File without changes
src/depth_pro.egg-info/PKG-INFO
DELETED
@@ -1,111 +0,0 @@
Metadata-Version: 2.1
Name: depth_pro
Version: 0.1
Summary: Inference/Network/Model code for Apple Depth Pro monocular depth estimation.
Project-URL: Homepage, https://github.com/apple/ml-depth-pro
Project-URL: Repository, https://github.com/apple/ml-depth-pro
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch
Requires-Dist: torchvision
Requires-Dist: timm
Requires-Dist: numpy<2
Requires-Dist: pillow_heif
Requires-Dist: matplotlib

## Depth Pro: Sharp Monocular Metric Depth in Less Than a Second

This software project accompanies the research paper:
**Depth Pro: Sharp Monocular Metric Depth in Less Than a Second**,
*Aleksei Bochkovskii, Amaël Delaunoy, Hugo Germain, Marcel Santos, Yichao Zhou, Stephan R. Richter, and Vladlen Koltun*.

![](data/depth-pro-teaser.jpg)

We present a foundation model for zero-shot metric monocular depth estimation. Our model, Depth Pro, synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. And the model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing, dedicated evaluation metrics for boundary accuracy in estimated depth maps, and state-of-the-art focal length estimation from a single image.

The model in this repository is a reference implementation, which has been re-trained. Its performance is close to the model reported in the paper but does not match it exactly.

## Getting Started

We recommend setting up a virtual environment. Using, e.g., miniconda, the `depth_pro` package can be installed via:

```bash
conda create -n depth-pro -y python=3.9
conda activate depth-pro

pip install -e .
```

To download the pretrained checkpoints, run the snippet below:

```bash
source get_pretrained_models.sh  # Files will be downloaded to the `checkpoints` directory.
```

### Running from the command line

We provide a helper script to run the model directly on a single image:

```bash
# Run prediction on a single image:
depth-pro-run -i ./data/example.jpg
# Run `depth-pro-run -h` for available options.
```

### Running from Python

```python
from PIL import Image
import depth_pro

# Load model and preprocessing transform.
model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load and preprocess an image.
image_path = "./data/example.jpg"
image, _, f_px = depth_pro.load_rgb(image_path)
image = transform(image)

# Run inference.
prediction = model.infer(image, f_px=f_px)
depth = prediction["depth"]  # Depth in [m].
focallength_px = prediction["focallength_px"]  # Focal length in pixels.
```
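The returned `depth` is metric (meters) and `focallength_px` is in pixels. As an illustrative follow-up that is not part of the package API: inverse depth is the usual quantity for visualization, and a horizontal field of view follows from the focal length by simple trigonometry. A minimal NumPy sketch, assuming `depth` has already been converted to a 2-D array (e.g. via `prediction["depth"].cpu().numpy()`):

```python
import math

import numpy as np


def inverse_depth_image(depth_m: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Map a metric depth map (meters) to a uint8 image of normalized
    inverse depth: near pixels become bright, far pixels dark."""
    inv = 1.0 / np.maximum(depth_m, eps)
    norm = (inv - inv.min()) / max(inv.max() - inv.min(), eps)
    return (255.0 * norm).astype(np.uint8)


def horizontal_fov_deg(image_width_px: int, f_px: float) -> float:
    """Horizontal field of view (degrees) implied by a focal length in pixels."""
    return 2.0 * math.degrees(math.atan(image_width_px / (2.0 * f_px)))
```

For example, a focal length equal to the image width gives `horizontal_fov_deg(w, float(w))` of about 53 degrees.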

### Evaluation (boundary metrics)

Our boundary metrics can be found under `eval/boundary_metrics.py` and can be used as follows:

```python
# For a depth-based dataset:
boundary_f1 = SI_boundary_F1(predicted_depth, target_depth)

# For a mask-based dataset (image matting / segmentation):
boundary_recall = SI_boundary_Recall(predicted_depth, target_mask)
```
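To give a feel for what such a metric measures, here is a deliberately simplified sketch: it marks pixels whose depth ratio to a neighbor exceeds a threshold as occluding-boundary candidates, then scores predicted boundaries against target boundaries with F1. Because it uses depth ratios, it is invariant to a global rescaling of the depth map. The exact definition lives in `eval/boundary_metrics.py`; this is an illustration, not the repository's implementation.

```python
import numpy as np


def depth_boundaries(depth: np.ndarray, ratio_thresh: float = 1.25,
                     eps: float = 1e-6) -> np.ndarray:
    """Boundary candidates: pixels where the depth ratio to the right or
    lower neighbor exceeds `ratio_thresh` (simplified occlusion-edge test)."""
    d = np.maximum(depth.astype(np.float64), eps)
    edges = np.zeros(d.shape, dtype=bool)
    ratio_x = np.maximum(d[:, 1:] / d[:, :-1], d[:, :-1] / d[:, 1:])
    ratio_y = np.maximum(d[1:, :] / d[:-1, :], d[:-1, :] / d[1:, :])
    edges[:, :-1] |= ratio_x > ratio_thresh
    edges[:-1, :] |= ratio_y > ratio_thresh
    return edges


def boundary_f1(pred_depth: np.ndarray, target_depth: np.ndarray,
                ratio_thresh: float = 1.25) -> float:
    """F1 between predicted and target boundary maps (single threshold)."""
    pred = depth_boundaries(pred_depth, ratio_thresh)
    target = depth_boundaries(target_depth, ratio_thresh)
    tp = np.logical_and(pred, target).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(target.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```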

## Citation

If you find our work useful, please cite the following paper:

```bibtex
@article{Bochkovskii2024:arxiv,
  author  = {Aleksei Bochkovskii and Ama\"{e}l Delaunoy and Hugo Germain and Marcel Santos and
             Yichao Zhou and Stephan R. Richter and Vladlen Koltun},
  title   = {Depth Pro: Sharp Monocular Metric Depth in Less Than a Second},
  journal = {arXiv},
  year    = {2024},
}
```

## License

This sample code is released under the [LICENSE](LICENSE) terms.

The model weights are released under the [LICENSE](LICENSE) terms.

## Acknowledgements

Our codebase is built using multiple open-source contributions; please see [Acknowledgements](ACKNOWLEDGEMENTS.md) for more details.

Please check the paper for a complete list of references and datasets used in this work.
src/depth_pro.egg-info/SOURCES.txt
DELETED
@@ -1,28 +0,0 @@
ACKNOWLEDGEMENTS.md
CODE_OF_CONDUCT.md
CONTRIBUTING.md
LICENSE
README.md
get_pretrained_models.sh
pyproject.toml
data/depth-pro-teaser.jpg
data/example.jpg
src/depth_pro/__init__.py
src/depth_pro/depth_pro.py
src/depth_pro/utils.py
src/depth_pro.egg-info/PKG-INFO
src/depth_pro.egg-info/SOURCES.txt
src/depth_pro.egg-info/dependency_links.txt
src/depth_pro.egg-info/entry_points.txt
src/depth_pro.egg-info/requires.txt
src/depth_pro.egg-info/top_level.txt
src/depth_pro/cli/__init__.py
src/depth_pro/cli/run.py
src/depth_pro/eval/boundary_metrics.py
src/depth_pro/eval/dis5k_sample_list.txt
src/depth_pro/network/__init__.py
src/depth_pro/network/decoder.py
src/depth_pro/network/encoder.py
src/depth_pro/network/fov.py
src/depth_pro/network/vit.py
src/depth_pro/network/vit_factory.py
src/depth_pro.egg-info/dependency_links.txt
DELETED
@@ -1 +0,0 @@
src/depth_pro.egg-info/entry_points.txt
DELETED
@@ -1,2 +0,0 @@
[console_scripts]
depth-pro-run = depth_pro.cli:run_main
src/depth_pro.egg-info/requires.txt
DELETED
@@ -1,6 +0,0 @@
torch
torchvision
timm
numpy<2
pillow_heif
matplotlib
src/depth_pro.egg-info/top_level.txt
DELETED
@@ -1 +0,0 @@
depth_pro