2021 18th Conference on Robots and Vision (CRV)
DOI: 10.1109/crv52889.2021.00021
RADDet: Range-Azimuth-Doppler based Radar Object Detection for Dynamic Road Users

Cited by 57 publications (34 citation statements)
References 16 publications
“…While they provide accurate range and velocity, they suffer from a low azimuth resolution, leading to ambiguity in separating close objects. Recent datasets include processed radar representations such as the entire Range-Azimuth-Doppler (RAD) tensor [31], [43] or single views of this tensor, either Range-Azimuth (RA) [1], [38], [17], [41], [27] or Range-Doppler (RD) [27]. These representations require a large bandwidth to be transmitted as well as large memory storage.…”
Section: Radar Background (mentioning)
confidence: 99%
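The RA and RD views discussed in this statement are 2-D aggregations of the full RAD tensor. The NumPy sketch below shows one way such views can be obtained by marginalizing power over the dropped axis; the tensor shape, the sum aggregation, and the byte counts are illustrative assumptions used only to make the bandwidth/storage argument concrete, not properties of any cited dataset.

```python
import numpy as np

def rad_views(rad_tensor: np.ndarray):
    """Collapse a (range, azimuth, Doppler) power tensor into its 2-D views.

    Summing power over the dropped axis is an illustrative choice; the cited
    datasets may store magnitudes, log-power, or complex values instead.
    """
    ra_view = rad_tensor.sum(axis=2)   # Range-Azimuth: marginalize Doppler
    rd_view = rad_tensor.sum(axis=1)   # Range-Doppler: marginalize azimuth
    return ra_view, rd_view

# Toy example: 256 range bins, 64 azimuth bins, 128 Doppler bins (assumed sizes).
rad = np.random.rand(256, 64, 128).astype(np.float32)
ra, rd = rad_views(rad)
print(ra.shape, rd.shape)              # (256, 64) (256, 128)

# The full tensor dominates storage and bandwidth, as the statement notes:
print(rad.nbytes / 1e6, "MB per frame for the RAD tensor")
print((ra.nbytes + rd.nbytes) / 1e6, "MB for the two views")
```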
“…Specific architectures have been designed to ingest aggregated views of the RAD tensor to detect objects in the RA view [23], [11]. The entire tensor has also been considered, either for object detection in both RA and RD views [43] or for object localisation in the camera image [32].…”
Section: Radar Background (mentioning)
confidence: 99%
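As a rough illustration of the dual-view idea this statement describes, and not the actual architecture of [43], [23], or [11], the following PyTorch sketch shares a 3-D encoder over the RAD tensor and attaches one lightweight detection head per view; all layer sizes, the anchor/class counts, and the mean-pooling used to collapse each axis are assumptions.

```python
import torch
import torch.nn as nn

class DualViewDetector(nn.Module):
    """Illustrative sketch only: a shared 3-D encoder over the RAD tensor
    feeding two view-specific heads, one for a Range-Azimuth output map and
    one for a Range-Doppler output map."""

    def __init__(self, channels: int = 16, num_anchors: int = 3, num_classes: int = 6):
        super().__init__()
        out = num_anchors * (num_classes + 5)  # YOLO-style: boxes + objectness + classes (assumed head format)
        self.encoder = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.ra_head = nn.Conv2d(channels, out, kernel_size=1)
        self.rd_head = nn.Conv2d(channels, out, kernel_size=1)

    def forward(self, rad: torch.Tensor):
        # rad: (batch, 1, range, azimuth, Doppler)
        feat = self.encoder(rad)
        ra_feat = feat.mean(dim=4)   # collapse Doppler -> (B, C, range, azimuth)
        rd_feat = feat.mean(dim=3)   # collapse azimuth -> (B, C, range, Doppler)
        return self.ra_head(ra_feat), self.rd_head(rd_feat)

model = DualViewDetector()
ra_out, rd_out = model(torch.randn(1, 1, 64, 32, 32))
print(ra_out.shape, rd_out.shape)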
“…Some sets exhibit low sample rates and imbalanced object class distributions, whereas other records do not include Doppler information [15], [16], depriving themselves of radar's unique feature altogether. Only recently have the first radar datasets featuring annotations at the frequency level been made publicly available [17], [18], [19], but it remains to be seen whether these will have a similar impact on the community and incite research efforts comparable to what the famous KITTI [20] and Cityscapes [21] benchmarks did for vision-based scene understanding. Both aspects, the tedious annotation of radar data followed by an inevitable inflow of misinformation and the preferable elimination of equivocation, call for completely different approaches, establishing the subfield of self-supervised learning.…”
Section: B. Foregoing Explicit Data Annotations (mentioning)
confidence: 99%
“…As no resampling is applied to correct for vastly misclassified first camera tokens, the importance of drawing high-quality samples cannot be overstated. Introducing a temperature parameter into the softmax normalization of the raw transformer logits, akin to equation (18), and truncating the tails of the PMF by top-k selection enables so-called nucleus sampling [85] of the camera tokens. More precisely, shrinking the sample space to only a few categories k ≪ K comprising the bulk of the probability mass increases both the sample quality and the reliability by preventing low-probability outcomes.…”
Section: Conditional Synthesis of Camera Symbols (mentioning)
confidence: 99%
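The sampling procedure this statement describes, a temperature-scaled softmax followed by truncation of the PMF tails before drawing a token, can be sketched as below. The function name and the temperature/top-k/top-p values are illustrative assumptions, and equation (18) of the citing paper is not reproduced here.

```python
import numpy as np

def sample_token(logits, temperature=0.8, top_k=50, top_p=0.9, rng=None):
    """Illustrative temperature + top-k + nucleus (top-p) sampling of one token.

    Hyperparameter values are assumptions for the sketch, not the cited work's.
    """
    rng = rng or np.random.default_rng()

    # Temperature-scaled softmax of the raw logits.
    z = logits / temperature
    z -= z.max()                               # numerical stability
    probs = np.exp(z) / np.exp(z).sum()

    # Truncate the tails: keep the top-k categories ...
    order = np.argsort(probs)[::-1][:top_k]
    kept = probs[order]
    # ... then shrink further to the smallest prefix holding >= top_p mass.
    cutoff = np.searchsorted(np.cumsum(kept), top_p) + 1
    order, kept = order[:cutoff], kept[:cutoff]

    kept /= kept.sum()                         # renormalize over k << K categories
    return int(rng.choice(order, p=kept))

# Toy vocabulary of 1024 camera tokens.
logits = np.random.default_rng(0).normal(size=1024)
print(sample_token(logits))
```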