Abstract: Safety-critical systems in Advanced Driver Assistance Systems (ADAS) depend on multiple sensors to perceive the environment in which they operate. Radar sensors provide many advantages and complementary capabilities to other available sensors, but are not without their own shortcomings. The performance of radar perception algorithms still poses many challenges, one of which is in object detection and classification. In order to increase redundancy in ADAS, the ability for a radar system to detect and classify object…
“…A shooting and bouncing ray solver was used in [54] to obtain the radar cross section of a human engaged in a series of dynamic motions. In this paper, we use Ansys' High Frequency Structure Simulator (HFSS) Shooting and Bouncing Rays (SBR+) solver [59], [61]–[64].…”
Detection and classification of vulnerable road users (VRUs) such as pedestrians and cyclists is a key requirement for the realization of fully autonomous vehicles. Radar-based classification of VRUs can be achieved by exploiting differences in the micro-Doppler signatures associated with VRUs. Specifically, machine learning (ML) algorithms can be trained to classify VRUs using the spectral content of radar signals. The performance of these models depends on the quality and quantity of the data used during the training process. Currently, data collection is typically done through measurements or low-fidelity, primitive-based physics simulations. The feasibility of carrying out measurements to collect training data is typically limited by the vast amounts of data required and by practicality issues when using VRUs like animals. In this paper, we present a computationally efficient, high-fidelity, physics-based simulation workflow that can be used to obtain a large quantity of spectrograms from the micro-Doppler signatures of VRUs. The simulations are conducted on full-scale VRU models with a 77 GHz, frequency-modulated continuous-wave (FMCW) radar sensor model. Here, we collect the spectrograms of four targets: car, pedestrian, cyclist, and dog, at different speeds and angles-of-arrival. This data is then used to train a 5-layer convolutional neural network (CNN) that achieves nearly 100% classification accuracy after 5 epochs. Studies are conducted to investigate the impact of training data size, velocity, and observation time window size on the accuracy of the CNN. Results from this study demonstrate how an accuracy of 95% can be realized using spectrograms obtained over a 0.2 s time window.
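The spectrograms used to train such a classifier are short-time Fourier transforms of the slow-time (chirp-to-chirp) radar return, in which limb motion shows up as time-varying Doppler sidebands. The following is a minimal sketch of that processing step, with an illustrative synthetic signal standing in for the solver output; the sampling rate, window parameters, and oscillating-velocity model are assumptions for illustration, not values from the paper.

```python
import numpy as np

def spectrogram(signal, win_len=128, hop=32):
    """Short-time Fourier transform magnitude (dB) of a slow-time radar signal.

    Each column is the Doppler spectrum of one Hann-windowed frame.
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack(
        [signal[i * hop : i * hop + win_len] * window for i in range(n_frames)],
        axis=1,
    )
    spec = np.fft.fftshift(np.fft.fft(frames, axis=0), axes=0)
    return 20 * np.log10(np.abs(spec) + 1e-12)

# Illustrative slow-time return: a 77 GHz scatterer whose radial velocity
# oscillates, a crude stand-in for the micro-Doppler of a swinging limb.
fs = 2000.0                                    # chirp repetition frequency (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)                  # 0.5 s observation window
fc, c = 77e9, 3e8
v = 1.5 + 0.8 * np.sin(2 * np.pi * 2.0 * t)    # oscillating radial velocity (m/s)
phase = 4 * np.pi * fc / c * np.cumsum(v) / fs  # accumulated two-way phase
sig = np.exp(1j * phase)

S = spectrogram(sig)
print(S.shape)  # → (128, 28): Doppler bins × time frames
```

An image like `S` (Doppler bins on one axis, time on the other) is the kind of input a CNN classifier consumes; shortening the observation window, as studied in the paper, simply reduces the number of columns.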
“…Finally, HFSS SBR+ also corrects the PO current truncation at shadow boundaries by including creeping wave (CW) physics. Therefore, using GO, PO, UTD, PTD and CW, high-fidelity physics-based synthetic radar returns can be obtained [25], [26]. Using 8 Tx elements with a spacing of 8λ and 16 Rx elements with a spacing of λ/2, a 128 virtual channel sensor was designed in SBR+.…”
Section: Validation of Simulation Setup and Post Processing
Automotive radar is one of the enabling technologies for advanced driver assistance systems (ADAS) and, subsequently, fully autonomous vehicles. Along with determining the range and velocity of targets with fairly high resolution, autonomous vehicles navigating complex urban environments need radar sensors with high azimuth and elevation resolution. Size and cost constraints limit the physical number of antennas that can be used to achieve high-resolution direction-of-arrival (DoA) estimation. Multiple-input/multiple-output (MIMO) schemes achieve larger virtual arrays using fewer physical antennas than would be needed for a single-input/multiple-output (SIMO) system. This paper presents a high-fidelity physics simulation of a 77 GHz, frequency-modulated continuous-wave (FMCW) 128 channel (8 transmitters (Tx), 16 receivers (Rx)) MIMO radar sensor. The 77 GHz synthetic radar returns from full-scale traffic scenes are obtained using a high-fidelity physics, shooting and bouncing ray electromagnetics solver. A fast Fourier transform (FFT) based signal processing scheme is used across slow-time (chirp) and space (channel) to obtain range-Doppler and DoA maps, respectively. Detection and angular separation performance comparisons of 16, 64, and 128 channel MIMO radar sensors are made for two complex driving scenarios.
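The FFT-based processing chain the abstract describes can be illustrated on a synthetic single-target beat signal: a range FFT across fast time and a Doppler FFT across slow time yield a range-Doppler map whose peak encodes the target's range and radial velocity. The chirp parameters below are illustrative assumptions, not the sensor configuration from the paper.

```python
import numpy as np

# Assumed FMCW parameters (illustrative only)
c, fc = 3e8, 77e9
B = 300e6            # chirp bandwidth (Hz)
Tc = 50e-6           # chirp duration (s)
n_samples = 256      # fast-time samples per chirp
n_chirps = 128       # slow-time chirps per frame
fs = n_samples / Tc
slope = B / Tc

# Synthetic beat signal for one target at range R (m), velocity v (m/s)
R, v = 30.0, 10.0
t_fast = np.arange(n_samples) / fs
beat = np.zeros((n_chirps, n_samples), dtype=complex)
for m in range(n_chirps):
    tau = 2 * (R + v * m * Tc) / c          # round-trip delay for chirp m
    beat[m] = np.exp(1j * 2 * np.pi * (slope * tau * t_fast + fc * tau))

# Range FFT across fast time (axis 1), Doppler FFT across slow time (axis 0)
rd = np.fft.fftshift(np.fft.fft2(beat), axes=0)
dop_bin, rng_bin = np.unravel_index(np.argmax(np.abs(rd)), rd.shape)

rng_est = rng_bin * c / (2 * B)                               # range bin = c/2B
vel_est = (dop_bin - n_chirps // 2) * c / (2 * fc * n_chirps * Tc)
print(rng_est, vel_est)   # peak lands near R = 30 m, v = 10 m/s
```

In the MIMO case, a third FFT across the 128 virtual channels turns the per-channel phase progression into the DoA map; the per-channel processing is identical to the above.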
“…Computer vision is a popular approach due to the low cost of cameras and the ability to classify the obstacles accurately (e.g., Mohamed et al., 2018; Janai et al., 2020). Machine learning approaches for environment abstraction are on the rise and appear promising (e.g., Yang et al., 2019; Fayyad et al., 2020; Sligar, 2020).…”
The article presents a review of recent literature on the performance metrics of Automated Driving Systems (ADS). More specifically, performance indicators of environment perception and motion planning modules are reviewed, as they are the most complicated ADS modules. The need to incorporate the level of threat an obstacle poses into the performance metrics is described, and a methodology to quantify the level of threat of an obstacle is presented in this regard. The approach involves simultaneously considering multiple stimulus parameters (those that elicit responses from drivers), thereby not ignoring multivariate interactions. Human-likeness of ADS is a desirable characteristic, as ADS share road infrastructure with humans; the described method can be used to develop human-like perception and motion planning modules of ADS. In this regard, performance metrics capable of quantifying the human-likeness of ADS are also presented, and a comparison of different performance metrics is summarized. ADS operators have an obligation to report any incident (crash/disengagement) to safety-regulating authorities. However, precrash events/states are not being reported. The need for the collection of precrash scenarios is described, and a desirable modification to the data reporting and collection process is suggested in the form of a framework. The framework describes the precrash sequences to be reported, along with possible ways for safety-regulating authorities to utilize such a valuable dataset to comprehensively assess (and consequently improve) the safety of ADS. The framework proposes to collect and maintain a repository of precrash sequences.
Such a repository can be used to 1) comprehensively learn and model the precrash scenarios, 2) learn the characteristics of precrash scenarios and eventually anticipate them, 3) assess the appropriateness of the different performance metrics in precrash scenarios, 4) synthesize a diverse dataset of precrash scenarios, 5) identify the ideal configuration of sensors and algorithms to enhance safety, and 6) monitor the performance of perception and motion planning modules.