Automotive radar is one of the enabling technologies for advanced driver assistance systems (ADAS) and, ultimately, fully autonomous vehicles. Along with determining the range and velocity of targets with fairly high resolution, autonomous vehicles navigating complex urban environments need radar sensors with high azimuth and elevation resolution. Size and cost constraints limit the physical number of antennas that can be used to achieve high-resolution direction-of-arrival (DoA) estimation. Multiple-input/multiple-output (MIMO) schemes achieve larger virtual arrays using fewer physical antennas than would be needed for a single-input/multiple-output (SIMO) system. This paper presents a high-fidelity physics simulation of a 77 GHz, frequency-modulated continuous-wave (FMCW) 128-channel (8-transmitter (Tx), 16-receiver (Rx)) MIMO radar sensor. The 77 GHz synthetic radar returns from full-scale traffic scenes are obtained using a high-fidelity, shooting-and-bouncing-rays electromagnetic solver. A fast Fourier transform (FFT)-based signal processing scheme is applied across slow time (chirps) and space (channels) to obtain range-Doppler and DoA maps, respectively. Detection and angular separation performance comparisons of 16-, 64-, and 128-channel MIMO radar sensors are made for two complex driving scenarios.
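The FFT-based processing chain described above can be sketched on a synthetic data cube. All parameters below (channel, chirp, and sample counts, normalized frequencies) are illustrative assumptions, not the paper's actual sensor configuration:

```python
import numpy as np

# Synthetic radar data cube: (channels, chirps, fast-time samples).
# Dimensions are illustrative, not the paper's 128-channel setup.
n_ch, n_chirps, n_samp = 8, 64, 128
rng = np.random.default_rng(0)

# Single point target: a beat frequency over fast time (range), a
# Doppler shift across chirps, and a linear phase across channels (DoA).
f_beat, f_dopp, sin_theta = 0.2, 0.1, 0.3   # normalized, assumed values
t = np.arange(n_samp)
c = np.arange(n_chirps)
ch = np.arange(n_ch)
cube = (np.exp(2j * np.pi * f_beat * t)[None, None, :]
        * np.exp(2j * np.pi * f_dopp * c)[None, :, None]
        * np.exp(1j * np.pi * sin_theta * ch)[:, None, None])
cube += 0.01 * (rng.standard_normal(cube.shape)
                + 1j * rng.standard_normal(cube.shape))

# FFT over fast time -> range, over slow time (chirps) -> Doppler,
# over channels -> direction of arrival, mirroring the processing chain.
range_fft = np.fft.fft(cube, axis=2)
rd_map = np.fft.fft(range_fft, axis=1)    # range-Doppler per channel
doa_map = np.fft.fft(rd_map, axis=0)      # channel FFT -> angle bins

# Peak of the non-coherently integrated range-Doppler map localizes
# the target; bins should land near f_beat*n_samp and f_dopp*n_chirps.
rd_power = np.abs(rd_map).sum(axis=0)
dopp_bin, range_bin = np.unravel_index(rd_power.argmax(), rd_power.shape)
print(range_bin, dopp_bin)
```

The same three-axis FFT structure generalizes directly to larger virtual arrays: only `n_ch` changes, which is why MIMO channel count trades off against angular resolution.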
Safety-critical systems in Advanced Driver Assistance Systems (ADAS) depend on multiple sensors to perceive the environment in which they operate. Radar sensors provide many advantages and capabilities complementary to other available sensors, but are not without their own shortcomings. The performance of radar perception algorithms still poses many challenges, one of which is object detection and classification. To increase redundancy in ADAS, the ability of a radar system to detect and classify objects independently of other sensors is desirable. In this paper, a machine-learning-based radar perception algorithm for object detection is investigated and implemented, along with a novel, automated workflow for generating large-scale virtual datasets used for training and testing. Physics-based electromagnetic simulation of a complex scattering environment is used to create the virtual dataset. Objects are classified and localized within Doppler-range maps produced by a single-channel 77 GHz FMCW radar system. The radar perception model, a fully convolutional network, is trained on a wide range of environments and traffic scenarios, and model inference is tested on entirely new environments and traffic scenarios. These simulated radar returns are highly scalable and offer an efficient method for dataset generation. Such virtual datasets facilitate a simple means of introducing variability into training data, corner-case evaluation, and root-cause analysis, among other advantages.
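The detection idea above rests on the core property of a fully convolutional network: with no dense layers, the same learned kernels slide over a Doppler-range map of any size and produce a spatial heat map rather than a single label. A minimal sketch of that operation, with a hand-made kernel standing in for learned weights (the map, kernel, and blob position are all assumed toy values):

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2D cross-correlation, the core op of a fully
    convolutional network (FCN)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# Toy Doppler-range map with one bright 3x3 target blob (illustrative).
rd_map = np.zeros((32, 32))
rd_map[10:13, 20:23] = 1.0

# A hand-made averaging kernel stands in for learned FCN weights.
kernel = np.ones((3, 3)) / 9.0
heat = np.maximum(conv2d(rd_map, kernel), 0.0)   # convolution + ReLU

# The heat-map peak localizes the object in Doppler-range coordinates.
i, j = np.unravel_index(heat.argmax(), heat.shape)
print(i, j)
```

A trained FCN stacks many such layers with learned kernels, but the localization mechanism, a peak in a convolutional response map, is the same.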
Detection and classification of vulnerable road users (VRUs) such as pedestrians and cyclists is a key requirement for the realization of fully autonomous vehicles. Radar-based classification of VRUs can be achieved by exploiting differences in the micro-Doppler signatures associated with VRUs. Specifically, machine learning (ML) algorithms can be trained to classify VRUs using the spectral content of radar signals. The performance of these models depends on the quality and quantity of the data used during the training process. Currently, data collection is typically done through measurements or low-fidelity, primitive-based simulations. The feasibility of carrying out measurements to collect training data is typically limited by the vast amounts of data required and by practicality issues when using VRUs such as animals. In this paper, we present a computationally efficient, high-fidelity physics-based simulation workflow that can be used to obtain a large quantity of spectrograms from the micro-Doppler signatures of VRUs. The simulations are conducted on full-scale VRU models with a 77 GHz, frequency-modulated continuous-wave (FMCW) radar sensor model. Here, we collect the spectrograms of four targets: a car, a pedestrian, a cyclist, and a dog, at different speeds and angles of arrival. This data is then used to train a 5-layer convolutional neural network (CNN) that achieves nearly 100% classification accuracy after 5 epochs. Studies are conducted to investigate the impact of training data size, velocity, and observation time window size on the accuracy of the CNN. Results from this study demonstrate that an accuracy of 95% can be achieved using spectrograms obtained over a 0.2 s time window.
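The spectrograms that feed the CNN are short-time Fourier transforms of the slow-time radar return, in which a VRU's limb motion shows up as a sinusoidal modulation around the bulk Doppler tone. A minimal sketch, assuming toy signal parameters (sampling rate, Doppler frequencies, and window sizes are illustrative, not the paper's sensor values):

```python
import numpy as np

# Toy slow-time radar return: a bulk-Doppler carrier plus a sinusoidally
# modulated micro-Doppler component (e.g. limb motion). All frequencies
# here are illustrative stand-ins, not the 77 GHz sensor's actual output.
fs = 1000.0                                   # assumed chirp rate, Hz
t = np.arange(0, 1.0, 1 / fs)
body = np.exp(2j * np.pi * 100 * t)                          # bulk Doppler
micro = np.exp(2j * np.pi * 20 * np.sin(2 * np.pi * 4 * t))  # micro-Doppler
sig = body * micro

def spectrogram(x, win=128, hop=32):
    """Magnitude STFT: Hann-windowed, overlapping FFTs over short time
    slices, stacked into a Doppler-vs-time map (the CNN's input image)."""
    w = np.hanning(win)
    frames = [np.fft.fftshift(np.fft.fft(w * x[s:s + win]))
              for s in range(0, len(x) - win + 1, hop)]
    return np.abs(np.array(frames)).T   # rows: Doppler bins, cols: time

S = spectrogram(sig)
print(S.shape)
```

The observation-time trade-off studied in the paper corresponds to cropping this map along its time axis: a 0.2 s window keeps only the columns spanning 0.2 s of slow time, which shortens data collection at some cost in signature detail.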