Pedestrian crowds have often been modeled as many-particle systems, including microscopic multi-agent simulators. One of the key challenges is to unearth the governing principles of pedestrian movement and use them to reproduce paths and behaviors that are frequently observed in human crowds. To that end, we present a novel crowd simulation algorithm that generates pedestrian trajectories exhibiting the speed-density relationships expressed by the Fundamental Diagram. Our approach is based on biomechanical principles and psychological factors. The overall formulation results in better utilization of free space by the pedestrians and can be easily combined with well-known multi-agent simulation techniques with little computational overhead. We are able to generate human-like dense crowd behaviors in large indoor and outdoor environments and validate the results against captured real-world crowd trajectories.
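The speed-density relationship the abstract refers to can be illustrated with one common empirical form of the Fundamental Diagram, Weidmann's model, in which walking speed decays exponentially toward zero as local crowd density approaches a jam density. The parameter values below (free-flow speed, decay constant, jam density) are commonly cited defaults, not values taken from this paper; this is a minimal sketch, assuming Weidmann's formulation.

```python
import math

def weidmann_speed(density, v_free=1.34, gamma=1.913, rho_max=5.4):
    """Weidmann fundamental diagram: walking speed (m/s) as a
    function of local crowd density (pedestrians per m^2).

    v_free  -- free-flow walking speed (~1.34 m/s, a typical value)
    gamma   -- empirical decay constant
    rho_max -- jam density at which movement stops (~5.4 ped/m^2)
    """
    if density <= 0.0:
        return v_free          # unobstructed: walk at free-flow speed
    if density >= rho_max:
        return 0.0             # jam density reached: no movement
    # Speed decays exponentially as density approaches the jam density.
    return v_free * (1.0 - math.exp(-gamma * (1.0 / density - 1.0 / rho_max)))
```

A crowd simulator can use such a curve as a target: for each agent, the local density is estimated and the agent's preferred speed is modulated accordingly, so that the aggregate trajectories reproduce the observed speed-density relationship.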
Current 3D capture and modeling technology can rapidly generate highly photorealistic 3D avatars of human subjects. However, while the avatars look like their human counterparts, their movements often fail to mimic those of the subjects they represent, owing to persistent challenges in accurate motion capture and retargeting. A better understanding of the factors that influence the perception of biological motion would be valuable for creating virtual avatars that capture the essence of their human subjects. To investigate these issues, we captured 22 subjects walking in an open space. We then performed a study in which participants were asked to identify their own motion in varying visual representations and scenarios. Similarly, participants were asked to identify the motion of familiar individuals. Unlike prior studies that used captured footage with simple "point-light" displays, we rendered the motion on photorealistic 3D virtual avatars of the subjects. We found that self-recognition was significantly higher for virtual avatars than for point-light representations. Participants were more confident in their responses when identifying their own motion presented on their virtual avatar. Recognition rates varied considerably between motion types for recognition of others, but not for self-recognition. Overall, our results are consistent with previous studies that used recorded footage and offer key insights into the perception of motion rendered on virtual avatars.