arxiv:2309.17260

PlaceNav: Topological Navigation through Place Recognition

Published on Sep 29, 2023

Abstract

Recent results suggest that splitting topological navigation into robot-independent and robot-specific components improves navigation performance by enabling the robot-independent part to be trained with data collected by different robot types. However, these navigation methods are still limited by the scarcity of suitable training data and suffer from poor computational scaling. In this work, we present PlaceNav, which subdivides the robot-independent part into navigation-specific and generic computer-vision components. We utilize visual place recognition for subgoal selection in the topological navigation pipeline. This makes subgoal selection more efficient and enables leveraging large-scale datasets from non-robotics sources, increasing training data availability. Bayesian filtering, enabled by place recognition, further improves navigation performance by increasing the temporal consistency of subgoals. Our experimental results verify the design: the new model obtains a 76% higher success rate in indoor and a 23% higher success rate in outdoor navigation tasks, with higher computational efficiency.
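In practice, VPR-based subgoal selection amounts to embedding the current camera view with the place-recognition model and retrieving the nearest map-node descriptor. Below is a minimal sketch in Python/NumPy; `embed` is a stand-in for the place-recognition network and all names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of VPR-based subgoal selection (illustrative, not the paper's code).
# `embed` is assumed to map an RGB image to a global descriptor vector,
# e.g. a CosPlace-style model; `map_images` are the topological map's
# node images recorded during the teach run.
import numpy as np

def build_map_index(map_images, embed):
    """Embed every map node once, offline; descriptors are L2-normalized."""
    descs = np.stack([embed(img) for img in map_images])
    return descs / np.linalg.norm(descs, axis=1, keepdims=True)

def select_subgoal(obs_image, map_descs, embed):
    """Pick the map node whose descriptor is nearest to the current view."""
    q = embed(obs_image)
    q = q / np.linalg.norm(q)
    dists = np.linalg.norm(map_descs - q, axis=1)  # L2 over descriptors
    return int(np.argmin(dists))                   # index of the subgoal node
```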

Community

Proposes PlaceNav: subdivides the robot-independent part of topological navigation into generic computer-vision and navigation-specific components. Uses visual place recognition (VPR) for subgoal selection, with the General Navigation Model (GNM) as the goal-reaching policy; a discrete Bayes filter combines the VPR measurements with a motion model. Instead of predicting the temporal distance between the current observation and each map node, subgoals are retrieved by nearest-neighbor search over global descriptors (as in the retrieval sketch above). The Bayesian filter computes a posterior over map nodes through alternating prediction and measurement steps; the measurement likelihood is the exponential of the negative L2 distance between image embeddings (see the filter sketch below). The place-recognition model is CosPlace (a convolutional encoder with GeM pooling, trained with a classification loss over spatial groups), here an EfficientNet-B0 trained on SF-XL images resized to 85x85. Achieves better success rates on indoor runs, with Bayesian filtering outperforming a sliding-window baseline. In offline VPR evaluation, dedicated place-recognition models do best (full-resolution CosPlace in particular), and even the low-resolution CosPlace-LR used here beats GNM's temporal subgoal sampling. From Tampere University.
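The subgoal filter described above can be sketched as a textbook discrete Bayes filter over the N map nodes. The exp(-L2) measurement likelihood follows the paper's description; the forward motion-model window `step_probs` and all names are assumptions for illustration.

```python
# Sketch of one predict/update cycle of a discrete Bayes filter over
# topological map nodes (illustrative; not the paper's exact code).
import numpy as np

def bayes_filter_step(belief, q_desc, map_descs, step_probs=(0.1, 0.6, 0.3)):
    """belief     : (N,) prior probability of being at each map node
    q_desc     : current observation's global descriptor
    map_descs  : (N, D) map-node descriptors
    step_probs : assumed motion model, P(advance by 0, 1, 2 nodes)"""
    n = len(belief)
    # Prediction: shift the belief forward along the route per the motion model.
    pred = np.zeros(n)
    for step, p in enumerate(step_probs):
        pred[step:] += p * belief[:n - step]
    # Measurement: likelihood ~ exp(-||q - d_i||_2), as described above.
    lik = np.exp(-np.linalg.norm(map_descs - q_desc, axis=1))
    post = pred * lik
    post /= post.sum()  # normalize to a probability distribution
    return post         # subgoal = np.argmax(post)
```

For reference, GeM (generalized-mean) pooling, the pooling layer in CosPlace, raises activations to a learnable power p, averages them spatially, and takes the p-th root. A common PyTorch formulation:

```python
# Standard GeM pooling layer with a learnable exponent p.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):                      # x: (B, C, H, W) feature map
        x = x.clamp(min=self.eps).pow(self.p)  # element-wise power
        x = F.avg_pool2d(x, (x.size(-2), x.size(-1)))  # spatial mean
        return x.pow(1.0 / self.p).flatten(1)  # (B, C) global descriptor
```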

Links: website, GitHub
