“…Similarly, Hong et al. [15] extract peaks exceeding one standard deviation above the mean intensity per azimuth. Kung et al. [33] and Mielle et al. [47] keep all points exceeding a noise threshold. However, a fixed noise floor with no additional restrictions requires prior knowledge of the noise level and does not mitigate multipath reflections.…”
Section: A Filtering and Feature Extraction Of Spinning Radar Data
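The per-azimuth statistical threshold described in the quote above can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: it assumes the raw scan is a 2-D intensity matrix (azimuths × range bins), and the function name and `k` parameter are hypothetical.

```python
import numpy as np

def filter_scan(scan: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Keep returns exceeding (mean + k * std) of their own azimuth row.

    scan: (n_azimuths, n_range_bins) intensity matrix.
    Returns a boolean mask of retained cells.
    """
    mean = scan.mean(axis=1, keepdims=True)  # per-azimuth mean intensity
    std = scan.std(axis=1, keepdims=True)    # per-azimuth intensity spread
    return scan > mean + k * std
```

Because the threshold adapts to each azimuth's own statistics, no global noise floor has to be known in advance, which is exactly the limitation the quote attributes to fixed-threshold schemes.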
This paper presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments (outdoors, from urban to woodland, and indoors in warehouses and mines) without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach CFEAR, we present an in-depth investigation on a wider range of data sets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves on our previous state of the art by 38%, thus, surprisingly, outperforming radar SLAM and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
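The abstract mentions mitigating outliers with a robust loss function during registration. As a hedged illustration of the general idea (the Cauchy loss is used here as one common robust choice; the paper's actual loss and parameters may differ), the iteratively-reweighted-least-squares weight of such a loss drives the influence of large residuals toward zero:

```python
import numpy as np

def cauchy_weight(r: np.ndarray, c: float = 0.1) -> np.ndarray:
    """IRLS weight w(r) = 1 / (1 + (r/c)^2) for the Cauchy robust loss.

    Inlier residuals (|r| << c) keep weight close to 1, while large
    residuals (e.g. multipath ghosts or wrong associations) are
    smoothly down-weighted toward 0 instead of dominating the solve.
    """
    return 1.0 / (1.0 + (r / c) ** 2)
```

In a registration solver, each correspondence residual would be multiplied by `sqrt(cauchy_weight(r))` before the next least-squares iteration; the scale `c` sets the residual magnitude at which down-weighting begins.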
“…Hence, in our investigation of radar we include both feature extraction and quality assessment. Radar-based methods that use alignment quality measures can be categorized into dense methods [16], [34], [35], which operate on raw radar images and do not explicitly perform data association, and sparse methods [1], [36]–[43], which compute alignment quality using keypoint locations, shape and descriptors over a correspondence set. Previous sparse methods use (weighted) Point-to-Point [39], [41], [42], Point-to-distribution [43] and Point-to-Line [1] metrics.…”
Section: Feature Extraction and Quality Assessment For Spinning Radar
“…Radar-based methods that use alignment quality measures can be categorized into dense methods [16], [34], [35], which operate on raw radar images and do not explicitly perform data association, and sparse methods [1], [36]–[43], which compute alignment quality using keypoint locations, shape and descriptors over a correspondence set. Previous sparse methods use (weighted) Point-to-Point [39], [41], [42], Point-to-distribution [43] and Point-to-Line [1] metrics. Key points can be extracted via SURF, blob detection [36], gradient-based feature detectors [41], [42], by a set of oriented surface points [1] or distributions [43] using a grid-based approach, or by semi-supervised [39] and unsupervised [39], [40] deep learning methods.…”
Section: Feature Extraction and Quality Assessment For Spinning Radar
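The difference between the point-to-point and point-to-line metrics named in the quote above can be made concrete with a minimal sketch (function names are illustrative, not from any cited implementation):

```python
import numpy as np

def point_to_point(p, q) -> float:
    """Full Euclidean distance between two matched keypoints."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def point_to_line(p, q, n) -> float:
    """Distance from p to the line through q with unit normal n.

    Only the component along the normal is penalized, so a point may
    slide along the local surface direction at zero cost -- useful when
    keypoints sample a surface rather than repeating exact landmarks.
    """
    p, q, n = (np.asarray(x, float) for x in (p, q, n))
    return abs(float(np.dot(n, p - q)))
```

For example, with `p = (1, 1)`, `q = (0, 0)` and normal `n = (0, 1)`, the point-to-point residual is √2 while the point-to-line residual is 1: the tangential offset is ignored.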
Robust perception is an essential component for enabling long-term operation of mobile robots. It depends on failure resilience, through reliable sensor data and pre-processing, as well as failure awareness, through introspection such as the ability to self-assess localization performance. This paper presents CorAl: a principled, intuitive, and generalizable method to measure the quality of alignment between pairs of point clouds, which learns to detect alignment errors in a self-supervised manner. CorAl compares the differential entropy in the point clouds separately with the entropy in their union to account for entropy inherent to the scene. By making use of dual entropy measurements, we obtain a quality metric that is highly sensitive to small alignment errors and still generalizes well to unseen environments. In this work, we extend our previous work on lidar-only CorAl to radar data by proposing a two-step filtering technique that produces high-quality point clouds from noisy radar scans. Thus we target robust perception in two ways: by introducing a method that introspectively assesses alignment quality, and by applying it to an inherently robust sensor modality. We show that our filtering technique combined with CorAl can be applied to the problem of alignment classification, and that it detects small alignment errors in urban settings with up to 98% accuracy, and up to 96% when trained only in a different environment. Our lidar and radar experiments demonstrate that CorAl outperforms previous methods both on the ETH lidar benchmark, which includes several indoor and outdoor environments, and on the large-scale Oxford and MulRan radar data sets for urban traffic scenarios. The results also demonstrate that CorAl generalizes very well across substantially different environments without the need for retraining.
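The dual-entropy comparison described in the CorAl abstract can be sketched at a high level. Note the simplifications: real CorAl computes per-point differential entropies over local neighborhoods, whereas this sketch fits a single Gaussian to each whole cloud purely to illustrate the "separate entropy vs. union entropy" idea; all names are hypothetical.

```python
import numpy as np

def gaussian_entropy(points: np.ndarray) -> float:
    """Differential entropy of a Gaussian fitted to the points:
    H = 0.5 * ln((2*pi*e)^d * det(Sigma))."""
    d = points.shape[1]
    cov = np.cov(points.T) + 1e-9 * np.eye(d)  # regularize near-flat clouds
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)

def alignment_quality(cloud_a: np.ndarray, cloud_b: np.ndarray) -> float:
    """Entropy of the union minus the mean of the separate entropies.

    A well-aligned pair adds little extra spread when merged, so the
    difference stays small; misalignment inflates the union's entropy.
    Scene-inherent entropy appears in both terms and largely cancels.
    """
    h_joint = gaussian_entropy(np.vstack([cloud_a, cloud_b]))
    h_sep = 0.5 * (gaussian_entropy(cloud_a) + gaussian_entropy(cloud_b))
    return h_joint - h_sep
```

Thresholding (or learning a classifier on) this difference is what turns the quality measure into the alignment classification task mentioned in the abstract.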
“…Other work that has made use of radar as a navigation sensor includes that by Park et al. [10], who apply the Fourier–Mellin transform to log-polar images computed from downsampled Cartesian images, and Kung et al. [11], who use a normal distribution transform typically applied to 2D and 3D LiDAR. Adolfsson et al. [12] employ filtering to retain the strongest azimuthal returns and compute a sparse set of oriented surface points, while Hong et al. [2] use vision-based features and graph matching in a radar context.…”
This paper presents a method that leverages vehicle motion constraints to refine data associations in a point-based radar odometry system. By using the strong prior that a nonholonomic robot is constrained to move smoothly through its environment, we develop the framework needed to estimate ego-motion from a single landmark association rather than considering all correspondences at once. This allows for informed outlier detection of poor matches, which are a dominant source of pose estimate error. By refining the subset of matched landmarks, we see an absolute decrease of 2.15% (from 4.68% to 2.53%) in translational error, approximately halving the odometry error (a 45.94% reduction) compared with using the full set of correspondences. This contribution is relevant to other point-based odometry implementations that rely on a range sensor, and it provides a lightweight and interpretable means of incorporating vehicle dynamics for ego-motion estimation.
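To see why a single landmark association suffices under a nonholonomic motion prior, consider a planar circular-arc motion model (no lateral slip) parametrized by heading change θ and arc length d: one 2-D correspondence gives two equations for those two unknowns. The sketch below is an illustrative construction under that assumption, not the paper's formulation; all names and the numerical solver choice are hypothetical.

```python
import numpy as np
from scipy.optimize import fsolve

def arc_pose(theta: float, d: float):
    """Pose change for a circular arc of length d with heading change
    theta (nonholonomic: no sideways translation). Returns (R, t)."""
    if abs(theta) < 1e-9:
        t = np.array([d, 0.0])               # straight-line limit
    else:
        r = d / theta                        # turning radius
        t = np.array([r * np.sin(theta), r * (1.0 - np.cos(theta))])
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]), t

def motion_from_one_match(p, q, guess=(0.01, 1.0)):
    """Solve q = R(theta) @ p + t(theta, d) for (theta, d) from a
    single landmark correspondence p -> q: 2 equations, 2 unknowns."""
    p, q = np.asarray(p, float), np.asarray(q, float)

    def residual(x):
        R, t = arc_pose(x[0], x[1])
        return R @ p + t - q

    return fsolve(residual, guess)
```

Each tentative correspondence thus yields its own motion hypothesis; hypotheses inconsistent with the consensus (or with the smoothness prior) flag their matches as outliers, which is the refinement mechanism the abstract describes.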