This paper presents a methodology to detect personality and basic-emotion characteristics of crowds in video sequences. First, individuals are detected and tracked; then groups are recognized and characterized. This information is mapped to OCEAN dimensions, which are used to infer personality and emotion in videos based on OCC emotion models. Although validating our results with real-life experiments is a clear challenge, we evaluate our method against the available literature on OCEAN values for different countries and on emergent personal distances among people. Hence, this analysis also reflects the cultural differences of each country. Our results indicate that the model generates information coherent with the data provided in the available literature, as shown in qualitative and quantitative results. Keywords: Computer vision • crowd features • Big-Five model • cultural dimensions • crowd emotion. Thanks to the Office of Naval Research Global (USA) and the Brazilian agencies CAPES, CNPq and FAPERGS.
We present a system that generates a procedural environment producing a desired crowd behaviour. Instead of altering the behavioural parameters of the crowd itself, we automatically alter the environment to yield the desired crowd behaviour. This novel inverse approach is useful both for crowd simulation in virtual environments and for urban crowd-planning applications. Our approach tightly integrates and extends a space-discretization crowd simulator with inverse procedural modelling. We extend crowd simulation with goal exploration (i.e. agents are initially unaware of the goal locations), variable-appeal sign usage and several acceleration schemes. We use Markov chain Monte Carlo to quickly explore the solution space, enabling interactive design. We have applied our method to a variety of virtual and real-world locations, achieving one order of magnitude faster crowd-simulation performance than related methods and a severalfold improvement in crowd indicators.
This article proposes an embodied conversational agent named Arthur. In addition to conversing with a person (using text and voice), Arthur can recognize the person he is talking to and detect his/her expressed emotion through facial expressions. Arthur uses these skills to improve communication with the user, drawing on his artificial memory, which stores and retrieves data about events and facts based on a model of human memory. We conducted experiments to collect quantitative and qualitative information, which show that our model has a consistent impact on users.
This paper presents a study, organized in two phases, of group behavior in a controlled experiment focused on an important attribute that varies across cultures: personal space. First, we study and compare the spatial behavior that different populations adopt with respect to their personal space. Second, we use simulation of virtual agents to artificially generate the movements of people in similar situations and validate the simulations against real video sequences. Our main goal is to extract population variations from video sequences and then simulate them in a way coherent with the literature on cultural aspects. In addition to the cultural aspects, we investigate the personality model in the studied videos using OCEAN (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism). Finally, we propose a way to simulate the fundamental-diagram experiment for other countries using the OCEAN psychological trait model as input. Results indicate that the simulated countries have characteristics consistent with the expected literature.
This work aims to evaluate people's perception of geometric features, personalities and emotions in virtual humans. As a basis, we use a dataset containing the tracking files of pedestrians captured from spontaneous videos and visualize them as identical virtual humans, so that viewers focus on behavior rather than being distracted by other features. In addition to tracking files containing pedestrian positions, the dataset contains pedestrian emotions and personalities detected using Computer Vision and Pattern Recognition techniques. Our analysis addresses the question of whether subjects can perceive geometric features, such as distances and speeds, as well as emotions and personalities, in video sequences in which pedestrians are represented by virtual humans. A total of 73 people volunteered for the experiment. The analysis was divided into two parts: i) evaluation of the perception of geometric characteristics, such as density, angular variation, distances and speeds, and ii) evaluation of personality and emotion perception. Results indicate that, even without the participants being told the concept of each personality or emotion or how it was computed (from geometric characteristics), in most cases participants perceived the personality and emotion expressed by the virtual agents in accordance with the available ground truth.