2022
DOI: 10.3390/s22114208

Towards Deep Radar Perception for Autonomous Driving: Datasets, Methods, and Challenges

Abstract: With recent developments, the performance of automotive radar has improved significantly. The next generation of 4D radar can achieve imaging capability in the form of high-resolution point clouds. In this context, we believe that the era of deep learning for radar perception has arrived. However, studies on radar deep learning are spread across different tasks, and a holistic overview is lacking. This review paper attempts to provide a big picture of the deep radar perception stack, including signal processing…

Cited by 62 publications (29 citation statements)
References 216 publications
“…A common theme that runs through the conventional approaches to sensing for autonomous operation in complex natural environments has been to counter the lack of controllability and predictability with the collection of large amounts of data. Examples of this approach are the use of vision [49], including monocular [49], stereo [50], and depth (red-green-blue-depth, RGB-D) cameras [72], as well as radar [71] and laser scanning (LIDAR) [38]. The common goal behind applying any of these sensing methods is usually to capture sensory data that would, at least in principle, suffice for building a digital model of the environment detailed enough to support planning of autonomous actions.…”
Section: Conventional Approaches
confidence: 99%
“…In autonomous driving, LiDARs are used to acquire dense point clouds, in which each point is generally represented by four values: its 3-D coordinates in space and the signal intensity. RAD tensors and sparse point clouds are the two main data representations for mm-wave radars [45]. RAD tensors are generated by applying fast Fourier transforms (FFTs) to the raw ADC signal three times, once each along the range, angle, and Doppler-velocity dimensions.…”
Section: A. Physical Radars
confidence: 99%
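To make the three-FFT pipeline above concrete, here is a minimal NumPy sketch, not code from the reviewed paper, of turning a raw complex ADC cube into a range-Doppler-angle (RAD) magnitude tensor. The input layout (fast-time samples × chirps × receive antennas), the Hanning windows, and the zero-padded 64-bin angle FFT are illustrative assumptions, as is the uniform linear receive array implied by taking a plain FFT across antennas.

```python
# Minimal sketch: RAD tensor from raw FMCW radar ADC samples via three FFTs.
# Shapes, windows, and FFT sizes are illustrative assumptions, not a sensor spec.
import numpy as np

def rad_tensor(adc):
    """adc: complex ADC cube of shape (samples, chirps, rx_antennas)."""
    samples, chirps, antennas = adc.shape
    # 1) Range FFT over fast-time samples within each chirp.
    r = np.fft.fft(adc * np.hanning(samples)[:, None, None], axis=0)
    # 2) Doppler FFT over slow time (chirps); shift zero velocity to the center bin.
    d = np.fft.fftshift(
        np.fft.fft(r * np.hanning(chirps)[None, :, None], axis=1), axes=1)
    # 3) Angle FFT across the receive array, zero-padded to 64 angle bins.
    a = np.fft.fftshift(np.fft.fft(d, n=64, axis=2), axes=2)
    return np.abs(a)

# Example frame: 256 samples/chirp, 128 chirps, 8 virtual RX antennas.
frame = np.random.randn(256, 128, 8) + 1j * np.random.randn(256, 128, 8)
print(rad_tensor(frame).shape)  # (256, 128, 64)
```

A production pipeline would add antenna calibration, sensor-specific windowing, and a detection stage (e.g., CFAR thresholding) before any sparse point cloud is extracted; the sketch only shows where the three FFTs act.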
“…Because of the unique nature of radar signals and the relative lack of publicly accessible datasets [63] that include both camera and radar data [21], [64–68] collected under foggy weather conditions, the scope of AV research on foggy weather has been severely constrained. For autonomous vehicle research, only a very small number of datasets, [67] and [21], combine information from cameras and radar under foggy weather conditions.…”
Section: Datasets and Semantic Labels
confidence: 99%