---
license: apache-2.0
---

# EVL: Egocentric Voxel Lifting

[[paper](https://arxiv.org/abs/2406.10224)]

## Intro

Egocentric Voxel Lifting (EVL) is a baseline model for 3D object detection and surface reconstruction.
EVL is trained on [Aria Synthetic Environments](https://www.projectaria.com/datasets/ase/), a large-scale simulated dataset.
EVL leverages egocentric modalities and inherits foundational capabilities from 2D foundation models.

<img src="assets/efm3d.png">

## Usage

Please refer to the [EFM3D](https://github.com/facebookresearch/efm3d) repo for installation instructions and details on how to use the model.
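
For a quick look at the released weights, a minimal sketch along the following lines may help. It assumes the checkpoint is a standard PyTorch file named `model.pth`; neither detail is stated in this card, so check the repo files and the EFM3D instructions for the actual filename and loading code.

```python
# Hypothetical sketch: peek at the released EVL checkpoint.
# Assumption (not from this card): the weights ship as a regular PyTorch
# checkpoint called "model.pth"; the real inference entry points are in
# the EFM3D repo.
import torch

ckpt = torch.load("model.pth", map_location="cpu")

# Checkpoints are typically (possibly nested) dicts of tensors; listing the
# top-level keys is a quick sanity check before wiring the weights into the
# EFM3D model code.
if isinstance(ckpt, dict):
    for key in list(ckpt)[:10]:
        print(key)
```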

## Citing EVL

```
@article{straub2024efm3d,
  title={EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models},
  author={Straub, Julian and DeTone, Daniel and Shen, Tianwei and Yang, Nan and Sweeney, Chris and Newcombe, Richard},
  journal={arXiv preprint arXiv:2406.10224},
  year={2024}
}
```

## License

EVL is released by Meta under the [Apache 2.0 license](LICENSE).