arxiv:1606.05830

Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

Published on Jun 19, 2016

Abstract

Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?

Community

Overview of state-of-the-art SLAM methods and pipelines as of 2016 (SLAM: Simultaneous Localization and Mapping, the joint estimation of the robot's state and a map of its environment).

- History: the classical age (1986 to 2004) centered on extended Kalman filters (EKFs), Rao-Blackwellized particle filters (RBPFs), and maximum likelihood estimation (MLE); the algorithmic-analysis age (2004 to 2015) studied properties such as convergence and consistency.
- Related problems: odometry and loop closure, sensor fusion under challenging conditions, visual-inertial navigation systems (VINS), and visual place recognition. Requirements depend on the type of robot, the operating environment, and the performance targets.
- Architecture: the front-end takes sensor data and performs feature extraction and data association; the back-end performs maximum a posteriori (MAP) estimation over a factor graph. Taking the negative log-likelihood (assuming Gaussian noise) turns MAP estimation into a sparse non-linear least-squares problem, which can be solved with libraries such as GTSAM, g2o, Ceres, iSAM, and SLAM++. This resembles bundle adjustment (BA) in structure from motion (SfM), but can include models beyond projective geometry (sensor and motion models).
- Front-end duties: feature/landmark extraction for the map, loop-closure detection, data association, and landmark initialization (e.g. by triangulation).
- Robustness: incorrect data association can be mitigated with RANSAC, but back-ends are still neither failure-safe nor failure-aware; retaining operation under degraded sensors (hardware failure), non-rigid maps, and automatic parameter tuning remain open research.
- Long-term SLAM: the factor graph grows without bound during exploration, so methods sparsify the graph or distribute computation over multiple robots; map representations for perpetual learning, forgetting and remembering map portions, and robust distributed mapping are still under research.
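The back-end's factor-graph step can be sketched in miniature: with Gaussian noise, MAP estimation reduces to minimizing a sum of squared residuals. Below is an illustrative 1-D pose graph with three odometry edges and one loop closure (all numbers are made up, not from the paper; real solvers such as GTSAM or g2o handle non-linear 2D/3D poses and exploit sparsity):

```python
import numpy as np

# Each edge (i, j, z) measures x_j - x_i. Odometry says +1.0 per step,
# but a loop closure claims pose 3 is only 0.1 from pose 0.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0),  # odometry
         (0, 3, 0.1)]                            # loop closure

n = 4
H = np.zeros((n, n))   # information matrix (kept sparse in real solvers)
b = np.zeros(n)
for i, j, z in edges:
    J = np.zeros(n)
    J[i], J[j] = -1.0, 1.0     # Jacobian of residual (x_j - x_i) - z
    H += np.outer(J, J)        # accumulate normal equations H x = b
    b += J * z
H[0, 0] += 1e6                 # strong prior pins x_0 = 0 (removes gauge freedom)

x = np.linalg.solve(H, b)      # one Gauss-Newton step (problem here is linear)
print(np.round(x, 3))          # [0. 0.275 0.55 0.825]
```

The 2.9 disagreement between odometry (total +3.0) and the loop closure (+0.1) is spread evenly over the four equally weighted edges, which is exactly what the least-squares back-end is for.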
- Map representations: metric maps can be landmark/feature-based (keypoint-centric mapping), locally dense (point clouds, polygon meshes), boundary-based (surface representations such as signed distance functions, SDFs), or built from 3D object representations. Expressive, optimal, and adaptive representations remain a challenge.
- Semantics: semantic information helps SLAM mapping and optimization; semantic-metric fusion, adaptive mapping, and exploiting environment knowledge in semantic (and metric) map updates are still under research.
- Active SLAM: controls the robot's motion to minimize the uncertainty of the map representation and of localization; exploration selects candidate vantage points and visits the one with the highest information-gain utility, since forecasting over all possible actions is intractable.
- New sensor modalities: time-of-flight (ToF) range sensors and LiDAR, and event cameras. Deep learning is so far mostly used for perception (the front-end).
- The review comes from ETH Zurich, MIT (Luca Carlone), the University of Adelaide, and the University of Zurich.
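The greedy vantage-point selection in active SLAM can be sketched with an entropy-based utility over an occupancy grid. Everything here (the grid values, the candidate poses, the square field of view) is an illustrative assumption, not the paper's method:

```python
import numpy as np

def entropy(p):
    # Binary entropy (bits) of occupancy probabilities; 0.5 = fully unknown.
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Toy occupancy grid: 0.5 = unexplored, values near 0/1 = already mapped.
grid = np.array([[0.5, 0.5, 0.1],
                 [0.5, 0.9, 0.1],
                 [0.1, 0.1, 0.1]])

def utility(grid, r, c):
    # Expected information gain approximated as the summed entropy of the
    # cells a view from (r, c) would cover (here a clipped 3x3 patch).
    patch = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return entropy(patch).sum()

candidates = [(0, 0), (2, 2)]                        # hypothetical vantage points
best = max(candidates, key=lambda rc: utility(grid, *rc))
print(best)  # (0, 0): the top-left view covers the most unexplored cells
```

Real active SLAM also folds in the expected effect on localization uncertainty and the cost of travel; this sketch shows only the exploration side of the utility.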

