2020 IEEE/ION Position, Location and Navigation Symposium (PLANS)
DOI: 10.1109/plans46316.2020.9109951
Look Around You: Sequence-based Radar Place Recognition with Learned Rotational Invariance

Abstract: This paper details an application which yields significant improvements to the adeptness of place recognition with Frequency-Modulated Continuous-Wave radar, a commercially promising sensor poised for exploitation in mobile autonomy. We show how a rotationally-invariant metric embedding for radar scans can be integrated into sequence-based trajectory matching systems typically applied to videos taken by visual sensors. Due to the complete horizontal field of view inherent to the radar scan formation process, w…
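
To make the abstract's pipeline concrete, the sequence-based matching stage can be sketched as follows. This is a minimal, illustrative sketch only: the rotationally-invariant embedding network is assumed to have been applied already, the constant-velocity diagonal search is a simplification in the spirit of SeqSLAM-style matchers, and the function name and parameters are hypothetical rather than the paper's implementation.

import numpy as np

def sequence_match(query_embs, map_embs, seq_len=5):
    """Match a short query sequence of radar-scan embeddings against a map.

    query_embs : (Q, D) L2-normalised embeddings of the most recent scans.
    map_embs   : (M, D) L2-normalised embeddings of previously visited places.
    Returns (best map index for the latest scan, its sequence score).
    """
    assert query_embs.shape[0] >= seq_len, "need at least seq_len query scans"
    # Pairwise cosine distance between every query and every map embedding.
    dists = 1.0 - query_embs @ map_embs.T          # shape (Q, M)

    # Score each candidate map index by summing distances along a
    # constant-velocity diagonal ending at that index (SeqSLAM-style).
    Q, M = dists.shape
    best_idx, best_score = -1, np.inf
    for m in range(seq_len - 1, M):
        score = sum(dists[Q - seq_len + i, m - seq_len + 1 + i]
                    for i in range(seq_len))
        if score < best_score:
            best_idx, best_score = m, score
    return best_idx, best_score

Scoring whole diagonals rather than single scans suppresses one-off perceptual aliasing, since several consecutive scans must agree before a match is accepted.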

Cited by 36 publications (29 citation statements). References 32 publications.
“…In Sȃftescu et al (2020), NetVLAD (Arandjelovic et al, 2016) was used to achieve radar-to-radar (R2R) place recognition. Then, the researchers used sequential radar scans to improve the localization performance (Gadd et al, 2020). In this paper, a deep neural network is also proposed to extract feature embeddings, but the proposed framework aims at heterogeneous place recognition.…”
Section: Radar-based Mapping and Localization
confidence: 99%
“…The supervised learning framework presented in Reference [8,9] and which our work extends uses rotationally-invariant feature extraction and triplet-mining but does not solve for the rigid-body pose of the sensor. The cross-modal radar-satellite works presented in Reference [10,28] do solve for the metric pose.…”
Section: Radar-based Mapping and Localisation
confidence: 99%
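
For reference, one simple way to obtain the rotational invariance mentioned in the excerpt above is to pool learned features over the azimuth axis of the polar radar scan, so that any cyclic shift (i.e. a change of sensor yaw) maps to the same descriptor. The sketch below illustrates the idea only; it is not the cited papers' exact architecture, and the tensor layout and function name are assumptions.

import torch.nn.functional as F

def rotation_invariant_embedding(polar_features):
    """polar_features: (B, C, A, R) CNN features over a polar radar scan,
    with A azimuth bins and R range bins."""
    # Max-pooling over the azimuth dimension discards absolute orientation:
    # rotating the scan cyclically permutes the A axis, but the maximum over
    # A is unchanged by any cyclic permutation.
    pooled = polar_features.amax(dim=2)        # (B, C, R)
    emb = pooled.flatten(start_dim=1)          # (B, C * R)
    return F.normalize(emb, dim=1)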
“…Indeed, there is a burgeoning interest in exploiting FMCW radar to enable robust mobile autonomy, including ego-motion estimation [2][3][4][5][6][7], localisation [7][8][9][10][11], Simultaneous Localisation and Mapping (SLAM) [12], and scene understanding [13][14][15]. However, despite radar's promise to deliver such capabilities, the study of these tasks is only mature for cameras and Light Detection and Rangings (LiDARs), and relatively little attention has been paid to radar for the same application.…”
Section: Introduction
confidence: 99%
“…Deep distance learning is of great significance in learning visual similarity. Recently, a specially designed triplet loss combined with CNN feature extraction has achieved good performance in face recognition [33], person re-identification [34,35], camera-LiDAR place recognition [36] and radar place recognition [37][38][39] tasks. The main concept behind the triplet loss is to minimize the distances of the same category images and maximize those of other categories in the Euclidean space.…”
Section: Introduction
confidence: 99%
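
The triplet-loss concept summarised in the excerpt above can be written down in a few lines. This is a hedged sketch in PyTorch: the margin value and the choice of Euclidean distance are illustrative defaults, not values taken from any of the cited works.

import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Pull embeddings of the same place together and push embeddings of
    different places apart by at least `margin` in Euclidean space."""
    d_pos = F.pairwise_distance(anchor, positive)   # same-place distance
    d_neg = F.pairwise_distance(anchor, negative)   # different-place distance
    return F.relu(d_pos - d_neg + margin).mean()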