Pedestrian and crowd simulation generally focuses on operational-level decisions, determining the exact steps pedestrians take within a representation of the environment, with the aim of reproducing observed patterns of space utilization, trajectories, and timings. When relatively large environments are considered, however, tactical-level decisions become equally important: multiple paths can generally be followed to reach a target from an entrance or starting point, and path length may not be the only reasonable criterion. This paper presents a hybrid agent architecture for modeling different types of decisions in a pedestrian simulation system, combining a floor-field-based operational level (grounded in a "least effort" principle) with an adaptive tactical-level component that uses a graph-like representation of the environment and accounts for both perceived congestion and the characteristics of potential paths in its decisions. The model is tested and evaluated, both qualitatively and quantitatively, in benchmark scenarios to show its adequacy and expressiveness.
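To make the two decision levels concrete, the following is a minimal sketch in Python (with illustrative names, grid, and weights that are not taken from the paper) of how a floor-field "least effort" step and a congestion-aware tactical route choice could be combined: the operational rule greedily moves to the neighbouring cell with the lowest static-field value, while the tactical rule scores alternative routes by length plus a congestion penalty.

    # Hypothetical sketch, not the paper's actual model.
    def operational_step(position, static_field, occupied):
        """Move to the free neighbouring cell with the lowest static-field value
        (distance to target), i.e. a greedy 'least effort' choice."""
        x, y = position
        neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        candidates = [c for c in neighbours if c in static_field and c not in occupied]
        if not candidates:
            return position  # fully blocked: stay put this step
        return min(candidates, key=lambda c: static_field[c])

    def tactical_choice(paths, congestion, alpha=1.0, beta=2.0):
        """Choose among alternative routes; each (length, label) path is scored by
        its length plus a perceived-congestion penalty (alpha, beta are illustrative)."""
        def cost(path):
            length, label = path
            return alpha * length + beta * congestion.get(label, 0.0)
        return min(paths, key=cost)

    if __name__ == "__main__":
        # Toy 3x3 grid: static field = Chebyshev distance to the target cell (2, 2).
        field = {(i, j): max(abs(2 - i), abs(2 - j)) for i in range(3) for j in range(3)}
        # The diagonal cell (1, 1) is occupied, so a lateral cell is chosen instead.
        print(operational_step((0, 0), field, occupied={(1, 1)}))  # -> (0, 1)
        # Two candidate routes: "A" is shorter but congested, "B" longer but freer.
        routes = [(12, "A"), (15, "B")]
        print(tactical_choice(routes, congestion={"A": 4.0, "B": 0.5}))  # -> (15, 'B')

In this toy run the tactical rule prefers the longer but less congested route, which is the kind of adaptive trade-off the abstract attributes to the tactical-level component.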
In this paper, we introduce the notion of Cooperative Perception Error Models (coPEMs) toward achieving an effective and efficient integration of V2X solutions within a virtual test environment. We focus our analysis on the occlusion problem in the (onboard) perception of Autonomous Vehicles (AVs), which can manifest as misdetection errors for occluded objects. Cooperative perception (CP) solutions based on Vehicle-to-Everything (V2X) communications aim to avoid such issues by cooperatively leveraging additional points of view on the world around the AV. This approach usually requires many sensors, mainly cameras and LiDARs, to be deployed simultaneously in the environment, either as part of the road infrastructure or on other traffic vehicles. However, implementing a large number of sensor models in a virtual simulation pipeline is often prohibitively computationally expensive. Therefore, in this paper, we rely on extending Perception Error Models (PEMs) to efficiently implement such cooperative perception solutions, along with the errors and uncertainties associated with them. We demonstrate the approach by comparing the safety achievable by an AV challenged with a traffic scenario in which occlusion is the primary cause of a potential collision.
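As an illustration of the general idea, the following Python sketch (hypothetical names and error rates, not the coPEM formulation from the paper) shows how a Perception Error Model can replace full sensor simulation by sampling misdetections whose probability depends on occlusion, and how fusing several such models, e.g. an ego sensor plus a roadside unit, can recover objects that are occluded from the ego vehicle's viewpoint.

    # Illustrative sketch only; all rates and names are assumptions.
    import random
    from dataclasses import dataclass

    @dataclass
    class PemSensor:
        name: str
        p_miss_visible: float = 0.05   # assumed misdetection rate when unoccluded
        p_miss_occluded: float = 0.90  # assumed misdetection rate when occluded

        def detect(self, obj_id: str, occluded: bool) -> bool:
            # Sample a detection outcome instead of simulating raw sensor data.
            p_miss = self.p_miss_occluded if occluded else self.p_miss_visible
            return random.random() > p_miss

    def cooperative_detection(sensors, occlusion_map, obj_id: str) -> bool:
        """Object counts as perceived if at least one cooperating sensor detects it."""
        return any(s.detect(obj_id, occlusion_map[s.name]) for s in sensors)

    if __name__ == "__main__":
        random.seed(0)
        ego = PemSensor("ego_lidar")
        rsu = PemSensor("roadside_camera")
        # The pedestrian is occluded for the ego vehicle but visible to the roadside unit.
        occlusion = {"ego_lidar": True, "roadside_camera": False}
        print(cooperative_detection([ego, rsu], occlusion, "pedestrian_1"))  # -> True

Because each viewpoint is reduced to a lightweight error model rather than a full sensor simulation, many cooperating sensors can be evaluated cheaply, which is the efficiency argument the abstract makes for PEM-based cooperative perception.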
Automotive perception involves understanding the external driving environment and the internal state of the vehicle cabin and occupants using sensor data. It is critical to achieving high levels of safety and autonomy in driving. This article provides an overview of the sensor modalities commonly used for perception, such as cameras, radars, and light detection and ranging (LiDAR), along with the associated data processing techniques. Critical aspects of perception are considered, including architectures for processing data from single or multiple sensor modalities, sensor data processing algorithms and the role of machine learning techniques, methodologies for validating the performance of perception systems, and safety. The technical challenges for each aspect are analyzed, with an emphasis on machine learning approaches given their potential impact on improving perception. Finally, future research opportunities toward wider deployment of automotive perception systems are outlined.