Abstract: The unique properties of radar sensors, such as their robustness to adverse weather conditions, make them an important part of the environment perception system of autonomous vehicles. One of the first steps during the processing of radar point clouds is often the detection of clutter, i.e. erroneous points that do not correspond to real objects. Another common objective is the semantic segmentation of moving road users. These two problems are handled strictly separately from each other in the literature. The employ…
“…Lidars capture limited points from the side surfaces of these vehicles when they are directly ahead. By contrast, radars yield dense uniform returns from these agents, owing to their large metallic bodies and underbody reflections [68], as discussed in Section VIII. Nevertheless, the associator in a late fusion system may routinely disregard shape features from radar model outputs due to the absence of in-context learning.…”
Section: A. Detection-based Tracking, 1) Late Fusion
“…Multipath effects for autonomous vehicles can be classified into three types: double bounce, underbody reflection, and mirrored ghost detections [68]. Radar double bounces occur due to two back-and-forth reflections between an object and the radar-equipped ego vehicle, resulting in false radar observations at double the range and velocity relative to the real object.…”
Section: Radar: Challenges and Opportunities, A. Multipath and Clutter
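The double-bounce geometry described above can be illustrated with a short sketch. This assumes an idealized point target and a purely radial two-bounce path; the function name and example values are illustrative, not drawn from any cited work:

```python
# Idealized double-bounce model: the wave travels ego -> target -> ego -> target -> ego,
# so the measured round-trip delay and Doppler shift are twice those of the direct
# return, placing a false observation at double the range and radial velocity.
def double_bounce_ghost(range_m: float, radial_velocity_mps: float) -> tuple:
    """Return the (range, radial velocity) at which the double-bounce
    ghost of a real target would appear in the radar measurement."""
    return 2.0 * range_m, 2.0 * radial_velocity_mps

# A target at 20 m with radial velocity -5 m/s (closing) yields a ghost
# observation at 40 m with radial velocity -10 m/s.
ghost_range, ghost_velocity = double_bounce_ghost(20.0, -5.0)
```

Because the ghost's range and velocity scale together with the real target's, it moves consistently over time, which is part of why such artifacts survive simple temporal filtering.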
Radar is a key component of the suite of perception sensors used for safe and reliable navigation of autonomous vehicles. Its unique capabilities include high-resolution velocity imaging, detection of agents in occlusion and over long ranges, and robust performance in adverse weather conditions. However, the usage of radar data presents some challenges: it is characterized by low resolution, sparsity, clutter, high uncertainty, and lack of good datasets. These challenges have limited radar deep learning research. As a result, current radar models are often influenced by lidar and vision models, which are focused on optical features that are relatively weak in radar data, thus resulting in under-utilization of radar's capabilities and diminishing its contribution to autonomous perception. This review seeks to encourage further deep learning research on autonomous radar data by 1) identifying key research themes, and 2) offering a comprehensive overview of current opportunities and challenges in the field. Topics covered include early and late fusion, occupancy flow estimation, uncertainty modeling, and multipath detection. The paper also discusses radar fundamentals and data representation, presents a curated list of recent radar datasets, and reviews state-of-the-art lidar and vision models relevant for radar research.
“…In the latter case, since ghost detections have dynamics similar to the real target, they are difficult to eliminate in the traditional detection pipeline. The multipath effect can be classified into three types [196]. The first type is the reflection between the ego vehicle and targets.…”
Section: Ghost Object Detection
“…Unlike clutter, ghost objects cannot be filtered by temporal tracking because they have the same kinematic properties as real targets. Instead, they can be detected by geometric methods [196,198]. With a radar ghost dataset, it is also possible to train a neural network for ghost detection, such as PointNet-based methods [89] and PointNet++-based methods [197,199].…”
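The geometric methods mentioned above exploit the fact that a mirrored ghost lies at the reflection of the real target across the reflecting surface. The following is a minimal sketch of that idea, assuming a known reflector modelled as a 2-D line; the function names and the tolerance are illustrative assumptions, not the method of the cited works:

```python
import numpy as np

def mirror_across_line(p, a, b):
    """Reflect 2-D point p across the infinite line through points a and b."""
    p, a, b = (np.asarray(x, dtype=float) for x in (p, a, b))
    d = (b - a) / np.linalg.norm(b - a)   # unit direction of the reflector
    foot = a + np.dot(p - a, d) * d       # foot of the perpendicular from p
    return 2.0 * foot - p                 # mirror image of p

def is_mirrored_ghost(detection, target, reflector_a, reflector_b, tol=0.5):
    """Flag `detection` as a mirrored ghost if it lies close to the mirror
    image of a confirmed `target` across the reflector (e.g. a guardrail)."""
    ghost = mirror_across_line(target, reflector_a, reflector_b)
    return bool(np.linalg.norm(np.asarray(detection, dtype=float) - ghost) < tol)

# Guardrail modelled as the x-axis; a real target at (10, 2) would produce
# a mirrored ghost near (10, -2).
print(is_mirrored_ghost((10.0, -2.0), (10.0, 2.0), (0.0, 0.0), (1.0, 0.0)))  # True
```

Learned approaches such as the PointNet-based detectors cited above replace this hand-crafted geometric test with features learned from a labelled ghost dataset.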
With recent developments, the performance of automotive radar has improved significantly. The next generation of 4D radar can achieve imaging capability in the form of high-resolution point clouds. In this context, we believe that the era of deep learning for radar perception has arrived. However, studies on radar deep learning are spread across different tasks, and a holistic overview is lacking. This review paper attempts to provide a big picture of the deep radar perception stack, including signal processing, datasets, labelling, data augmentation, and downstream tasks such as depth and velocity estimation, object detection, and sensor fusion. For these tasks, we focus on explaining how the network structure is adapted to radar domain knowledge. In particular, we summarise three overlooked challenges in deep radar perception, including multi-path effects, uncertainty problems, and adverse weather effects, and present some attempts to solve them.
“…Therefore, automotive radars always operate in the presence of elevation multipath [24], [27]-[29]. Elevation multipath may also occur in tunnels, below bridges, or under overhead road signs and constructions [30]. Horizontal multipath occurs when driving near guardrails, buildings, and adjacent vehicles [31], [32].…”
Autonomous driving and advanced active safety features require accurate high-resolution sensing capabilities. Automotive radars are a key component of the vehicle sensing suite. However, when these radars operate in proximity to flat surfaces, such as roads and guardrails, they experience a multipath phenomenon that can degrade the accuracy of direction-of-arrival (DOA) estimation. The presence of multipath leads to misspecification in the radar data model, resulting in estimation performance degradation that cannot be reliably predicted by conventional performance bounds. In this paper, the misspecified Cramér-Rao bound (MCRB), which accounts for model misspecification, is derived for the problem of DOA estimation in the presence of multipath that is ignored by the estimator. Analytical relations between the MCRB and the Cramér-Rao bound are established, and the DOA estimation performance degradation due to multipath is investigated. The results show that the MCRB reliably predicts the asymptotic performance of the misspecified maximum-likelihood estimator and can therefore serve as an efficient tool for automotive radar performance evaluation and system design.
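For context, the MCRB in the misspecification literature typically takes a "sandwich" form; the sketch below shows that general structure, not the paper's specific derivation. For a misspecified log-likelihood $\tilde{\ell}(\boldsymbol{\theta})$ evaluated at the pseudo-true parameter $\boldsymbol{\theta}_0$:

```latex
\mathrm{MCRB}(\boldsymbol{\theta}_0)
  = \mathbf{A}(\boldsymbol{\theta}_0)^{-1}\,
    \mathbf{B}(\boldsymbol{\theta}_0)\,
    \mathbf{A}(\boldsymbol{\theta}_0)^{-1},
\qquad
\mathbf{A} = \mathbb{E}\!\left[\nabla_{\boldsymbol{\theta}}^{2}\,\tilde{\ell}(\boldsymbol{\theta}_0)\right],
\qquad
\mathbf{B} = \mathbb{E}\!\left[\nabla_{\boldsymbol{\theta}}\tilde{\ell}(\boldsymbol{\theta}_0)\,
              \nabla_{\boldsymbol{\theta}}\tilde{\ell}(\boldsymbol{\theta}_0)^{\mathsf{T}}\right].
```

When the assumed model matches the true data distribution, $\mathbf{A} = -\mathbf{B}$ and the sandwich collapses to the classical Cramér-Rao bound, which is the limiting relation the paper's analysis builds on.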