---
library_name: align3r
tags:
- image-to-3d
- model_hub_mixin
- pytorch_model_hub_mixin
---
[**Align3R: Aligned Monocular Depth Estimation for Dynamic Videos**](https://arxiv.org/abs/2412.03079)
[*Jiahao Lu*\*](https://github.com/jiah-cloud),
[*Tianyu Huang*\*](https://scholar.google.com/citations?view_op=list_works&hl=en&user=nhbSplwAAAAJ),
[*Peng Li*](https://scholar.google.com/citations?user=8eTLCkwAAAAJ&hl=zh-CN),
[*Zhiyang Dou*](https://frank-zy-dou.github.io/),
[*Cheng Lin*](https://clinplayer.github.io/),
*Zhiming Cui*,
[*Zhen Dong*](https://dongzhenwhu.github.io/index.html),
[*Sai-Kit Yeung*](https://saikit.org/index.html),
[*Wenping Wang*](https://scholar.google.com/citations?user=28shvv0AAAAJ&hl=en),
[*Yuan Liu*](https://liuyuan-pal.github.io/)
arXiv, 2024.
**Align3R** estimates temporally consistent video depth, dynamic point clouds, and camera poses from monocular videos.
```bibtex
@article{lu2024align3r,
title={Align3R: Aligned Monocular Depth Estimation for Dynamic Videos},
author={Lu, Jiahao and Huang, Tianyu and Li, Peng and Dou, Zhiyang and Lin, Cheng and Cui, Zhiming and Dong, Zhen and Yeung, Sai-Kit and Wang, Wenping and Liu, Yuan},
journal={arXiv preprint arXiv:2412.03079},
year={2024}
}
```
### How to use
First, [install Align3R](https://github.com/jiah-cloud/Align3R).
To load the model:
```python
import torch
from dust3r.model import AsymmetricCroCo3DStereo

# Download the checkpoint from the Hub and move it to GPU if available
model = AsymmetricCroCo3DStereo.from_pretrained("cyun9286/Align3R_DepthPro_ViTLarge_BaseDecoder_512_dpt")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
```
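
Once the model is loaded, inference on video frames follows the DUSt3R-style pipeline that Align3R builds on. The snippet below is a minimal sketch continuing from the loading code above: it assumes the Align3R fork keeps the standard `dust3r` entry points (`load_images`, `make_pairs`, `inference`, `global_aligner`), and the frame paths are placeholders. How the precomputed monocular depth (e.g., from Depth Pro) is injected may differ, so consult the demo scripts in the [Align3R repository](https://github.com/jiah-cloud/Align3R) for the exact interface.

```python
# Minimal inference sketch, assuming the Align3R fork keeps the standard
# DUSt3R API (load_images / make_pairs / inference / global_aligner).
# Frame paths are placeholders; see the repo's demo scripts for the exact
# pipeline, including how Depth Pro depth maps are fed to the network.
from dust3r.inference import inference
from dust3r.utils.image import load_images
from dust3r.image_pairs import make_pairs
from dust3r.cloud_opt import global_aligner, GlobalAlignerMode

# Load consecutive video frames, resized to the model's 512-px resolution
images = load_images(["frames/000000.png", "frames/000001.png"], size=512)

# Build image pairs and run the two-view network on each pair
pairs = make_pairs(images, scene_graph="complete", prefilter=None, symmetrize=True)
output = inference(pairs, model, device, batch_size=1)

# Align all pairwise predictions into one globally consistent scene
scene = global_aligner(output, device=device, mode=GlobalAlignerMode.PointCloudOptimizer)
scene.compute_global_alignment(init="mst", niter=300, schedule="cosine", lr=0.01)

# Temporally consistent depth maps, camera poses, and dynamic point clouds
depth_maps = scene.get_depthmaps()
cam_poses = scene.get_im_poses()
points_3d = scene.get_pts3d()
```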