2019 IEEE Radar Conference (RadarConf)
DOI: 10.1109/radar.2019.8835755

Advanced Radar Micro-Doppler Simulation Environment for Human Motion Applications

Cited by 7 publications (4 citation statements)
References 8 publications
“…In [138], radar measurements are enriched with GNSS ground-truth for ML-based VRU recognition. Furthermore, synthesizing VRU radar responses with radar simulators has been presented with the motion ground-truth obtained from kinematic models [139], animations [140], [141], or Kinect data [142], [143].…”
Section: E Machine Learning and Automotive Radar
confidence: 99%
“…The authors of [14] published source code, which is only suitable for static scenes. Furthermore, we managed to reproduce the method explained in [16] for exporting moving point-targets from Blender using "speed vector pass". However, the point-target information was accurate enough only for 2D movements but not for complex 3D motions.…”
Section: Distinction From Related Work
confidence: 99%
“…It is worth pointing out that the graphics in the scene were static and only the radar sensor was allowed to move. In [16], the authors overcame this problem by using another rendered image called "speed vector pass", which contains information about pixels moving in two dimensions (x and y). The authors were able to generate a spectrogram from a synthetically generated human figure that was waving both hands.…”
Section: Introduction
confidence: 99%
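The citation statements above describe generating a micro-Doppler spectrogram from the motion of a synthetic target. As a rough illustration of the common core of such pipelines (not the cited implementation; carrier frequency, PRF, and motion parameters are all assumed), the phase history of a single oscillating point target can be turned into a spectrogram with a short-time Fourier transform:

```python
import numpy as np

# Illustrative sketch: micro-Doppler spectrogram of one oscillating
# point target (e.g. a waving hand). All parameters are assumed.
fc = 24e9                           # carrier frequency [Hz]
lam = 3e8 / fc                      # wavelength [m]
prf = 2000.0                        # pulse repetition frequency [Hz]
t = np.arange(0, 2.0, 1.0 / prf)    # slow time, 2 s

# Target range: 5 m mean plus a 0.05 m oscillation at 2 Hz
r = 5.0 + 0.05 * np.sin(2 * np.pi * 2.0 * t)

# Phase history of the radar return: phi = -4*pi*r/lambda
s = np.exp(-1j * 4 * np.pi * r / lam)

# Short-time Fourier transform -> spectrogram (time x Doppler)
win, hop = 128, 16
frames = [s[i:i + win] * np.hanning(win)
          for i in range(0, len(s) - win, hop)]
spec = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)) ** 2

print(spec.shape)   # (number of frames, FFT bins)
```

The oscillating range produces a sinusoidal Doppler trace in `spec`, which is the micro-Doppler signature that classifiers in the cited works are trained on.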
“…Approaches can be distinguished with respect to the underlying human target model, which can be based on a set of characteristic skeletal keypoints, whose positions are derived from motion capture data [14] or Kinect sensors [15], [16], [17], and models that derive scattering centers from 3D body models generated e.g. with help of computer graphics software [18], [19], [20].

Ref.        Approach         Motion ground truth
[12]        GAN              -
[13]        GAN              -
[14]        Simulation       MoCap
[16], [17]  Simulation       Kinect
[15], [21]  Simulation       Kinect
[18], [19]  Simulation       Blender
[20]        Simulation       Blender
[23]        Domain Transfer  Video (Mono)
This work   Simulation       Video (Stereo)
(1 DA: direct augmentation, 2 PT: pre-training, 3 CD: cross-domain training)

Starting from these simulation approaches, some papers recently explored the potential of simulated radar data for augmentation.…”
Section: Introduction
confidence: 99%
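The keypoint-based simulation family surveyed in the quote above typically superposes the returns of point scatterers placed at skeletal keypoints. A minimal sketch of that superposition (assumed toy parameters and keypoint trajectories, not any specific cited simulator):

```python
import numpy as np

# Minimal sketch of keypoint-based radar simulation: coherently sum
# the returns of point scatterers at skeletal keypoints.
# All parameters and trajectories are assumed for illustration.
fc = 77e9                           # carrier frequency [Hz]
lam = 3e8 / fc                      # wavelength [m]
prf = 4000.0                        # pulse repetition frequency [Hz]
t = np.arange(0, 1.0, 1.0 / prf)    # slow time, 1 s

# Two toy keypoints: torso receding at 1 m/s, hand swinging at 1.5 Hz
ranges = np.stack([
    4.0 + 1.0 * t,
    4.0 + 1.0 * t + 0.3 * np.sin(2 * np.pi * 1.5 * t),
])
amp = np.array([1.0, 0.3])          # relative scatterer amplitudes

# Coherent sum over scatterers: s(t) = sum_i a_i * exp(-j*4*pi*r_i(t)/lambda)
s = (amp[:, None] * np.exp(-1j * 4 * np.pi * ranges / lam)).sum(axis=0)

print(s.shape)
```

Feeding `s` through a short-time Fourier transform would yield the torso line plus a sinusoidal hand component, the same qualitative signature the cited simulators aim to reproduce from MoCap, Kinect, or video-derived keypoints.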