2020 IEEE International Radar Conference (RADAR)
DOI: 10.1109/radar42522.2020.9114871

Multi-Modal Cross Learning for Improved People Counting using Short-Range FMCW Radar

Cited by 24 publications (11 citation statements)
References 16 publications
“…Various applications rely on multi-modal sensory input as shown in [5]–[14]. A multi-modal approach for gesture recognition using video, depth camera, and optical flow is shown in [6].…”
Section: Related Work
confidence: 99%
“…Stacked autoencoders are used in [14] for a cross-modal audio mapping and a gesture-to-audio conversion task. An example combining a radar sensor with a camera for people counting is shown in [5]. This approach uses 60 GHz radar data in the form of range-angle images, while the camera images are preprocessed into density heatmaps.…”
Section: Related Work
confidence: 99%
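The density-heatmap preprocessing mentioned in the excerpt above is a standard crowd-counting step: point annotations (e.g. head positions) are rendered as normalized Gaussians, so the resulting map integrates to approximately the person count. A minimal NumPy sketch, assuming a fixed-sigma Gaussian per person; the function name `density_heatmap` and its parameters are illustrative, and the cited paper's exact pipeline may differ:

```python
import numpy as np

def density_heatmap(points, shape, sigma=4.0):
    """Render point annotations (x, y) as a density map.

    Each person contributes a 2-D Gaussian whose mass integrates to
    roughly 1, so heat.sum() approximates the number of people.
    """
    h, w = shape
    heat = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[0:h, 0:w]  # pixel coordinate grids
    for (x, y) in points:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        g /= 2 * np.pi * sigma ** 2  # normalize each Gaussian's mass to ~1
        heat += g
    return heat

# Example: three annotated people in a 64x64 frame
hm = density_heatmap([(10, 10), (32, 32), (50, 20)], (64, 64))
print(round(hm.sum()))  # ≈ 3
```

A network regressing such a map can be supervised per pixel, and the predicted count is recovered by summing the map.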
“…In this application, the complementary nature of radar and optical data is not used per se (the image registration is based on features that are present in both modalities). Only in recent years have methods exploiting the complementary nature of radar and one or more other modalities been reported [14][15][16][17][18]. Some of these methods focus particularly on data fusion and less on cross learning, that is, all modalities are always assumed to be present [14,15].…”
Section: Introduction
confidence: 99%