The virtual testing and validation of advanced driver assistance system (ADAS) and automated driving (AD) functions require efficient and realistic perception sensor models. In particular, the limitations and measurement errors of real perception sensors need to be simulated realistically in order to generate useful sensor data for the ADAS/AD function under test. In this paper, a novel sensor modeling approach for automotive perception sensors is introduced. The approach combines kernel density estimation with regression modeling and focuses primarily on position measurement errors. It is designed for any automotive perception sensor that provides position estimations at the object level. To demonstrate and evaluate the new approach, a common state-of-the-art automotive camera (Mobileye 630) was considered. Both sensor measurements (Mobileye position estimations) and ground-truth data (DGPS positions of all participating vehicles) were collected during a large measurement campaign on a Hungarian highway to support the development and experimental validation of the new approach. The quality of the model was tested against reference measurements, yielding a pointwise position error of 9.60% in the lateral and 1.57% in the longitudinal direction. Additionally, the modeling of the natural scattering of the sensor model output was satisfactory. In particular, the deviations of the position measurements were well reproduced by this approach.
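The core idea of combining kernel density estimation with the measured position errors can be illustrated with a short sketch. This is a minimal, hypothetical illustration, not the paper's implementation: the error samples are synthetic stand-ins for the Mobileye-vs-DGPS differences, the bandwidth uses Silverman's rule of thumb, and sampling from the KDE is done by picking a stored error and jittering it with the bandwidth.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired error samples (sensor position minus DGPS ground truth),
# lateral and longitudinal, in metres. In practice these would come from the
# measurement campaign described in the abstract.
errors = np.column_stack([
    rng.normal(0.0, 0.3, 500),   # lateral error
    rng.normal(0.5, 1.0, 500),   # longitudinal error
])

n, d = errors.shape
# Silverman's rule-of-thumb bandwidth, per dimension.
bw = errors.std(axis=0) * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4))

def sample_errors(k, rng=rng):
    """Draw k error vectors from a Gaussian KDE fitted to the observed errors:
    pick a stored sample uniformly at random, then add bandwidth-scaled noise."""
    idx = rng.integers(0, n, size=k)
    return errors[idx] + rng.normal(0.0, bw, size=(k, d))

def simulate_measurement(ground_truth):
    """Perturb ground-truth (lateral, longitudinal) object positions with
    KDE-sampled measurement errors to emulate a noisy object-level sensor."""
    gt = np.atleast_2d(np.asarray(ground_truth, dtype=float))
    return gt + sample_errors(gt.shape[0])
```

A regression model, as in the paper, would additionally make the error distribution depend on the target's position (e.g., larger longitudinal scatter at long range); the sketch above uses a single global distribution for brevity.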
New advanced driver assistance system/automated driving (ADAS/AD) functions have the potential to significantly enhance the safety of vehicle passengers and road users, while also enabling new transportation applications and potentially reducing CO2 emissions. To achieve the next level of driving automation, i.e., SAE Level 3, physical test drives need to be supplemented by simulations in virtual test environments. A major challenge for today's virtual test environments is to provide a realistic representation of the vehicle's perception system (camera, lidar, radar). Therefore, new and improved sensor models are required to perform representative virtual tests that can supplement physical test drives. In this article, we present a computationally efficient, mathematically complete, and geometrically exact generic sensor modeling approach that solves the field-of-view (FOV) and occlusion tasks. We also discuss potential extensions, such as bounding-box cropping and sensor-specific, weather-dependent FOV-reduction approaches for camera, lidar, and radar. The performance of the new modeling approach is demonstrated using camera measurements from a test campaign conducted in Hungary in 2020, together with three artificial scenarios (a multi-target scenario with an adjacent truck occluding other road users and two traffic jam situations in which the ego vehicle is either a car or a truck). These scenarios are benchmarked against existing sensor modeling approaches that only exclude objects outside the sensor's maximum detection range or angle. The presented modeling approach can be used as is or serve as the basis for a more complex sensor model, as it reduces the number of potentially detectable targets and therefore improves the performance of subsequent simulation steps.
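The FOV-and-occlusion filtering described above can be sketched in a few lines. This is a deliberate simplification, not the paper's geometrically exact method: targets are reduced to a centre point plus a width in the sensor frame, the FOV parameters are assumed values, and a target counts as occluded when a strictly closer target's angular span covers its centre azimuth.

```python
import math

def visible(targets, max_range=80.0, half_fov=math.radians(19.0)):
    """Filter (x, y, width) targets in the sensor frame, metres.
    Step 1: drop targets outside the maximum range or angular FOV.
    Step 2: drop targets whose centre azimuth falls inside the
    angular span of a nearer, already-accepted target."""
    inside = []
    for x, y, w in targets:
        r = math.hypot(x, y)
        az = math.atan2(y, x)
        if r <= max_range and abs(az) <= half_fov:
            half_span = math.atan2(w / 2.0, r)  # angular half-width at range r
            inside.append((r, az, half_span, (x, y, w)))
    inside.sort()  # nearest targets first
    kept = []
    for i, (r, az, _span, target) in enumerate(inside):
        occluded = any(abs(az - az2) <= span2
                       for _r2, az2, span2, _ in inside[:i])
        if not occluded:
            kept.append(target)
    return kept
```

For example, with a wide truck at 10 m directly ahead, a car at 40 m behind it on the same bearing is removed as occluded, and a car at a 37° bearing is removed by the FOV check. Baseline approaches that the article benchmarks against would keep the occluded car, since they filter only on range and angle.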