2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9196835
Under the Radar: Learning to Predict Robust Keypoints for Odometry Estimation and Metric Localisation in Radar

Abstract: This paper presents a self-supervised framework for learning to detect robust keypoints for odometry estimation and metric localisation in radar. By embedding a differentiable point-based motion estimator inside our architecture, we learn keypoint locations, scores and descriptors from localisation error alone. This approach avoids imposing any assumption on what makes a robust keypoint and crucially allows them to be optimised for our application. Furthermore the architecture is sensor agnostic and can be app…
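The abstract mentions a differentiable point-based motion estimator embedded in the architecture. One standard closed-form choice for such an estimator is a weighted Kabsch/Procrustes alignment between matched keypoints; because every step is differentiable, a localisation loss can back-propagate through it to the keypoint weights. The sketch below is an illustrative numpy version restricted to SE(2) (the planar case relevant to radar odometry); the function name and exact formulation are assumptions, not the paper's implementation.

```python
import numpy as np

def weighted_kabsch_2d(src, dst, w):
    """Closed-form SE(2) fit: R, t minimising sum_i w_i * ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 2) matched keypoint coordinates; w: (N,) non-negative weights
    (e.g. learned keypoint scores). Every step is differentiable in w.
    """
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)          # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    S = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)  # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(S)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Illustrative usage: recover a known planar motion from noiseless matches.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
src = np.random.default_rng(0).normal(size=(20, 2))
dst = src @ R_true.T + t_true
R, t = weighted_kabsch_2d(src, dst, np.ones(20))
```

With exact correspondences the recovered (R, t) matches the generating motion; in a learned pipeline, the weights w would come from predicted keypoint scores, so unreliable points are softly down-weighted rather than hard-rejected.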

Cited by 93 publications (122 citation statements)
References 21 publications
“…However, each of these methods relies on either hand-crafted feature detectors and descriptors or cumbersome phase correlation techniques. Barnes and Posner [9] showed that learned features can result in superior radar odometry performance. Their work currently represents the state of the art for point-based radar odometry.…”
Section: Related Work
“…Previous works in this area have made significant progress towards radar-based odometry [2-4, 9, 11, 14-16, 26, 34, 39] and place recognition [20, 22, 31, 43, 46]. However, previous approaches to radar odometry have either relied on handcrafted feature extraction [2-4, 14-16, 26, 34], correlative scan matching [11, 39], or a (self-)supervised learning algorithm [9, 11] that relies on trajectory ground truth. Barnes and Posner [9] previously showed that learned features have the potential to outperform hand-crafted features.…”
Section: Introduction
“…The local descriptors are then fed to a PointNetVLAD layer with attention to learn a global descriptor for place recognition. Barnes and Posner [3] learned keypoints from radar images for odometry, and pooled local descriptors across spatial dimensions into a global one per image for place recognition.…”
Section: Deep Learning for Point-Cloud Place Recognition
“…We set λ = 10 in our experiments. The points extraction step in Algorithm 1 is fully differentiable, allowing the loss in Equation (5) to not only optimise the DCP network, but also fine-tune the pretrained network for f o , without needing to also apply the loss in Equation (3). The data flow for pose estimation is shown on the left of Figure 7.…”
Section: Learning Pose Estimation
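The statement above hinges on the point-extraction step being fully differentiable, which is what lets a pose loss back-propagate through extracted points into an upstream network. A common way to achieve this is a soft-argmax: instead of picking the hard maximum of a score map, take the expected coordinate under a softmax over scores. The numpy sketch below is illustrative only; the function name, the temperature parameter, and this specific mechanism are assumptions, and the cited work's extraction step may differ.

```python
import numpy as np

def soft_argmax_2d(score_map, temperature=1.0):
    """Differentiable keypoint location: expected (x, y) under a softmax of the scores.

    As temperature -> 0 this approaches the hard argmax, but gradients
    flow through every cell of the score map, unlike np.argmax.
    """
    h, w = score_map.shape
    logits = score_map.ravel() / temperature
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]         # row (y) and column (x) index grids
    x = (p * xs.ravel()).sum()          # expected column coordinate
    y = (p * ys.ravel()).sum()          # expected row coordinate
    return x, y

# Illustrative usage: a sharp peak at row 2, column 3 with a low temperature
# yields a location very close to the hard argmax (3.0, 2.0).
score = np.zeros((5, 7))
score[2, 3] = 10.0
x, y = soft_argmax_2d(score, temperature=0.1)
```

The same expectation trick also yields sub-pixel locations when the peak is spread over neighbouring cells, which is one practical reason such pipelines prefer it over a hard maximum.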