---
license: mit
tags:
- vision
- depth-estimation
model-index:
- name: dpt-swinv2-tiny-256
results:
- task:
type: monocular-depth-estimation
name: Monocular Depth Estimation
dataset:
type: MIX-6
name: MIX-6
metrics:
- type: Zero-shot transfer
value: 10.82
name: Zero-shot transfer
config: Zero-shot transfer
verified: false
---
# MiDaS 3.1 DPT (Intel/dpt-swinv2-tiny-256 using SwinV2 backbone)
DPT (Dense Prediction Transformer) model trained on 1.4 million images for monocular depth estimation. It was introduced in the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. (2021) and first released in [this repository](https://github.com/isl-org/MiDaS/tree/master).
**Disclaimer:** The team releasing DPT did not write a model card for this model, so this model card has been written by Intel and the Hugging Face team.
# Overview of Monocular Depth Estimation
Monocular depth estimation aims to infer detailed depth from a single image or camera view, and finds applications in fields like generative AI, 3D reconstruction, and autonomous driving. However, deriving depth from individual pixels in a single image is challenging because the problem is under-constrained. Recent progress is largely attributed to learning-based methods, in particular MiDaS, which leverages dataset mixing and a scale-and-shift-invariant loss. MiDaS has evolved across releases, featuring more powerful backbones as well as lightweight variants for mobile use. With the rise of transformer architectures in computer vision, pioneered by models such as ViT, Swin, and SwinV2, there has been a shift toward using them for depth estimation as well. Inspired by this, MiDaS v3.1 incorporates promising transformer-based encoders alongside traditional convolutional ones, aiming for a comprehensive investigation of depth estimation techniques. The paper focuses on describing the integration of these backbones into MiDaS, providing a thorough comparison of the different v3.1 models, and offering guidance on using future backbones with MiDaS.
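As a minimal illustration of the scale-and-shift-invariant idea mentioned above (not the exact MiDaS loss or evaluation code; the function and variable names here are ours), a relative depth prediction can be aligned to ground truth with a least-squares scale and shift before measuring error:
```python
import numpy as np

def align_scale_shift(pred, gt):
    # Solve min_{s,t} || s * pred + t - gt ||^2 in closed form (least squares),
    # so the error metric ignores the global scale/shift ambiguity of relative depth.
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
    return s * pred + t

# toy check: a prediction that is correct up to scale and shift aligns to ~zero RMSE
gt = np.random.rand(64, 64)
pred = 3.0 * gt + 0.5
aligned = align_scale_shift(pred, gt)
print(np.sqrt(np.mean((aligned - gt) ** 2)))  # ~0
```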
Swin Transformer (the name Swin stands for Shifted windows) was first described in [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) and serves as a general-purpose backbone for computer vision. It is a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while still allowing cross-window connections.
Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.
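To make the windowed attention idea concrete, here is a minimal, self-contained sketch (not the actual Swin/SwinV2 implementation) of partitioning a feature map into non-overlapping windows, and of the cyclic shift that lets consecutive blocks attend across window boundaries:
```python
import torch

def window_partition(x, window_size):
    # split a (B, H, W, C) feature map into non-overlapping windows of shape
    # (num_windows * B, window_size, window_size, C); self-attention runs per window
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

x = torch.randn(1, 8, 8, 96)                                # toy feature map: 8x8 tokens, 96 channels
regular_windows = window_partition(x, window_size=4)        # regular (aligned) windows
shifted = torch.roll(x, shifts=(-2, -2), dims=(1, 2))       # cyclic shift by half a window
shifted_windows = window_partition(shifted, window_size=4)  # shifted windows
print(regular_windows.shape, shifted_windows.shape)         # torch.Size([4, 4, 4, 96]) each
```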
| Input Image | Output Depth Image |
| --- | --- |
| ![input image](https://cdn-uploads.huggingface.co/production/uploads/63dc702662dc193e6d460f1b/PDwRwuryaO3YtuyRjraiM.jpeg) | ![Depth image](https://cdn-uploads.huggingface.co/production/uploads/63dc702662dc193e6d460f1b/ugqri6LcqJBuU9zI9aeqN.jpeg) |
# Videos
![MiDaS Depth Estimation | Intel Technology](https://cdn-uploads.huggingface.co/production/uploads/641bd18baebaa27e0753f2c9/u-KwRFIQhMWiFraSTTBkc.png)
MiDaS Depth Estimation is a machine learning model from Intel Labs for monocular depth estimation. It was trained on up to 12 datasets and covers both indoor and outdoor scenes. Multiple MiDaS models are available, ranging from high-quality depth estimation to lightweight models for mobile downstream tasks (https://github.com/isl-org/MiDaS).
## Model description
This MiDaS 3.1 DPT model uses [SwinV2](https://huggingface.co/docs/transformers/en/model_doc/swinv2) as its backbone. It takes a different approach to vision than BEiT: the Swin backbones rely on a hierarchical feature representation.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png)
While the previous release, MiDaS v3.0, solely leverages the vanilla vision transformer ViT, MiDaS v3.1 offers additional models based on BEiT, Swin, SwinV2, Next-ViT and LeViT.
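One way to verify which backbone a given checkpoint uses is to inspect its configuration. A quick sketch, assuming the backbone settings are exposed via `backbone_config` (field names may vary across transformers versions):
```python
from transformers import DPTForDepthEstimation

model = DPTForDepthEstimation.from_pretrained("Intel/dpt-swinv2-tiny-256")
# backbone-based DPT checkpoints keep the encoder settings in backbone_config
print(model.config.backbone_config.model_type)  # expected: "swinv2"
print(model.config.backbone_config.image_size)  # expected: 256
```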
# MiDaS 3.1 DPT Model (SwinV2 backbone)
This model is Intel/dpt-swinv2-tiny-256, based on the SwinV2 backbone. The arXiv paper compares both BEiT and Swin backbones; the highest quality depth estimation is achieved using the BEiT transformers.
The Swin family provides variants such as Swin-L, SwinV2-L, SwinV2-B, and SwinV2-T, where L, B, and T denote large, base, and tiny models respectively. This tiny variant is trained at a 256x256 resolution (hence dpt-swinv2-tiny-256), while the larger variants are trained at 384x384.
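The variants in this family are published as separate checkpoints that can be loaded by name; for example, a hedged sketch (the checkpoint ID below is assumed from the Intel organization's naming scheme, so verify it on the Hub):
```python
from transformers import pipeline

# assumed checkpoint name for the SwinV2 large, 384x384 variant
pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-large-384")
```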
This model card refers specifically to the SwinV2 tiny model from the paper, referred to there as dpt-swinv2-tiny-256. A more recent paper from 2023, specifically discussing Swin and SwinV2, is [MiDaS v3.1 – A Model Zoo for Robust Monocular Relative Depth Estimation](https://arxiv.org/pdf/2307.14460.pdf).
This model card has been written jointly by the Hugging Face team and Intel.

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | March 18, 2024 |
| Version | 1 |
| Type | Computer Vision - Monocular Depth Estimation |
| Paper or Other Resources | [MiDaS v3.1 – A Model Zoo for Robust Monocular Relative Depth Estimation](https://arxiv.org/pdf/2307.14460.pdf) and [GitHub Repo](https://github.com/isl-org/MiDaS/blob/master/README.md) |
| License | MIT |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/dpt-swinv2-tiny-256/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|

| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the raw model for zero-shot monocular depth estimation. See the [model hub](https://huggingface.co/models?search=dpt-swinv2) to look for fine-tuned versions on a task that interests you. |
| Primary intended users | Anyone doing monocular depth estimation |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
## How to use
Be sure to update PyTorch and Transformers, as version mismatches can generate errors such as: "TypeError: unsupported operand type(s) for //: 'NoneType' and 'NoneType'".
As tested by this contributor, the following versions ran correctly:
```python
import torch
import transformers
print(torch.__version__)
print(transformers.__version__)
```
```text
2.2.1+cpu
4.37.2
```
### To Install:
```bash
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip3 install transformers
```
### To Use:
Here is how to use this model for zero-shot depth estimation on an image:
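First, load the model and image processor, run inference, and resize the prediction back to the input resolution. This is a minimal sketch following the standard Transformers DPT usage; swap in your own image URL and device placement as needed:
```python
import numpy as np
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DPTForDepthEstimation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("Intel/dpt-swinv2-tiny-256")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-swinv2-tiny-256")

# prepare the image for the model
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth

# interpolate the prediction back to the original image size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)
```
The resulting `prediction` tensor can then be converted into a viewable depth image: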
```python
# scale the predicted depth to 8-bit and convert it to a PIL image for viewing
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
depth
```
Alternatively, one can use the pipeline API:
```python
from transformers import pipeline
pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-tiny-256")
result = pipe("http://images.cocodataset.org/val2017/000000181816.jpg")
result["depth"]
```
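The pipeline returns a dictionary with both the raw `predicted_depth` tensor and a ready-to-view `depth` PIL image, which can, for example, be saved to disk (the filename below is arbitrary):
```python
# "depth" is a PIL.Image; "predicted_depth" is the raw torch tensor
result["depth"].save("depth.png")
print(result["predicted_depth"].shape)
```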
## Quantitative Analyses
| Model | Square Resolution HRWSI RMSE | Square Resolution Blended MVS REL | Square Resolution ReDWeb RMSE |
| --- | --- | --- | --- |
| BEiT 384-L | 0.068 | 0.070 | 0.076 |
| Swin-L Training 1| 0.0708 | 0.0724 | 0.0826 |
| Swin-L Training 2 | 0.0713 | 0.0720 | 0.0831 |
| ViT-L | 0.071 | 0.072 | 0.082 |
| Next-ViT-L-1K-6M | 0.075 |0.073 | 0.085 |
| DeiT3-L-22K-1K | 0.070 | 0.070 | 0.080 |
| ViT-L-Hybrid | 0.075 | 0.075 | 0.085 |
| DeiT3-L | 0.077 | 0.075 | 0.087 |
| ConvNeXt-XL | 0.075 | 0.075 | 0.085 |
| ConvNeXt-L | 0.076 | 0.076 | 0.087 |
| EfficientNet-L2| 0.165 | 0.277 | 0.219 |
| ViT-L Reversed | 0.071 | 0.073 | 0.081 |
| Swin-L Equidistant | 0.072 | 0.074 | 0.083 |
# Ethical Considerations and Limitations
dpt-swinv2-tiny-256 can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of dpt-swinv2-tiny-256, developers should perform safety testing.
# Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
- Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
# Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2307-14460,
  author     = {Reiner Birkl and Diana Wofk and Matthias M{\"u}ller},
  title      = {MiDaS v3.1 -- {A} Model Zoo for Robust Monocular Relative Depth Estimation},
  journal    = {CoRR},
  volume     = {abs/2307.14460},
  year       = {2023},
  url        = {https://arxiv.org/abs/2307.14460},
  eprinttype = {arXiv},
  eprint     = {2307.14460},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2307-14460.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```