arxiv:2109.08238

Habitat-Matterport 3D Dataset (HM3D): 1000 Large-scale 3D Environments for Embodied AI

Published on Sep 16, 2021
Abstract

We present the Habitat-Matterport 3D (HM3D) dataset. HM3D is a large-scale dataset of 1,000 building-scale 3D reconstructions from a diverse set of real-world locations. Each scene in the dataset consists of a textured 3D mesh reconstruction of interiors such as multi-floor residences, stores, and other private indoor spaces. HM3D surpasses existing datasets available for academic research in terms of physical scale, completeness of the reconstruction, and visual fidelity. HM3D contains 112.5k m^2 of navigable space, which is 1.4 - 3.7x larger than other building-scale datasets such as MP3D and Gibson. When compared to existing photorealistic 3D datasets such as Replica, MP3D, Gibson, and ScanNet, images rendered from HM3D have 20 - 85% higher visual fidelity w.r.t. counterpart images captured with real cameras, and HM3D meshes have 34 - 91% fewer artifacts due to incomplete surface reconstruction. The increased scale, fidelity, and diversity of HM3D directly impacts the performance of embodied AI agents trained using it. In fact, we find that HM3D is "Pareto optimal" in the following sense -- agents trained to perform PointGoal navigation on HM3D achieve the highest performance regardless of whether they are evaluated on HM3D, Gibson, or MP3D. No similar claim can be made about training on other datasets. HM3D-trained PointNav agents achieve 100% performance on Gibson-test dataset, suggesting that it might be time to retire that episode dataset.

Community

Proposes the Habitat-Matterport 3D (HM3D) dataset: larger in scale and more photorealistic than Gibson and ScanNet (1,000 near-complete, high-fidelity building-scale reconstructions); diverse enough that PointGoal-navigation agents trained on it generalize to other datasets; larger navigable area and higher navigation complexity. Data were collected across diverse domains (residences, stores, workplaces) from locations around the world; each scene is a 3D reconstruction from RGB-D scans captured with a Matterport Pro2 depth sensor. HM3D has the largest share of scenes with low reconstruction defects (measured as the proportion of invalid depth values exceeding a percentage threshold) compared to Gibson and RoboTHOR, and fares much better than ScanNet. Visual fidelity is assessed with the FID and KID (from MMD GANs) metrics. PointNav (PointGoal navigation) benchmark agent: ResNet-50 backbone for visual features, an MLP for the GPS and compass (heading) input, an LSTM state encoder that aggregates observations over time, and fully connected (MLP) heads that give predicted state values (value function) and action logits (policy); trained with DD-PPO for 1.5B frames (a minimal sketch of this architecture follows below). Agents trained on HM3D perform better than those trained on MP3D (the older Matterport3D dataset) or Gibson with either depth or RGB input, and show the best validation performance across datasets during training. The appendix covers geographical coverage (scans by country), additional results, example scenes (top-down views, cross sections, and egocentric views), and PointNav qualitative results. From Meta, UT Austin, Georgia Tech (Dhruv Batra), Simon Fraser, and Cornell.
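A minimal PyTorch sketch of the PointNav agent described above, assuming standard components: a ResNet-50 visual encoder, a small MLP for the GPS+compass goal vector, an LSTM state encoder, and linear heads for action logits and state value. Class names, layer sizes, and the four-action discrete action space are illustrative assumptions, not the authors' exact implementation (which is trained with DD-PPO in Habitat).

```python
# Illustrative sketch of a PointNav agent; not the authors' code.
import torch
import torch.nn as nn
import torchvision.models as models


class PointNavAgent(nn.Module):
    def __init__(self, num_actions: int = 4, hidden_size: int = 512):
        super().__init__()
        # Visual backbone: ResNet-50 features from the egocentric RGB (or depth) frame.
        backbone = models.resnet50(weights=None)
        self.visual_encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 2048, 1, 1)
        self.visual_fc = nn.Linear(2048, hidden_size)

        # MLP embedding of the GPS+compass goal vector (distance, heading); sizes assumed.
        self.goal_mlp = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 32))

        # LSTM state encoder aggregates observations over time into a recurrent state.
        self.state_encoder = nn.LSTM(hidden_size + 32, hidden_size, batch_first=True)

        # Fully connected heads: action logits (policy) and predicted state value (critic).
        self.policy_head = nn.Linear(hidden_size, num_actions)
        self.value_head = nn.Linear(hidden_size, 1)

    def forward(self, rgb, goal, hidden=None):
        # rgb: (B, 3, H, W) egocentric frame; goal: (B, 2) distance + heading to the point goal.
        visual = self.visual_fc(self.visual_encoder(rgb).flatten(1))
        goal_emb = self.goal_mlp(goal)
        fused = torch.cat([visual, goal_emb], dim=-1).unsqueeze(1)  # single timestep
        out, hidden = self.state_encoder(fused, hidden)
        features = out.squeeze(1)
        return self.policy_head(features), self.value_head(features), hidden


if __name__ == "__main__":
    agent = PointNavAgent()
    logits, value, _ = agent(torch.randn(2, 3, 224, 224), torch.randn(2, 2))
    print(logits.shape, value.shape)  # torch.Size([2, 4]) torch.Size([2, 1])
```

In the actual benchmark the policy and value heads drive an actor-critic update (DD-PPO) distributed across many simulator workers; the sketch only covers the network's forward pass.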

Links: Website, Blog, GitHub (Matterport version)
