Abstract: This paper presents a framework for the analysis of affective behavior starting from a reduced amount of visual information related to human upper-body movements. The main goal is to identify a minimal representation of emotional displays based on nonverbal gesture features. The GEMEP (Geneva Multimodal Emotion Portrayals) corpus was used to validate this framework. Twelve emotions expressed by ten actors form the selected data set of emotion portrayals. Visual tracking of head and hand trajectories was performed from a frontal and a lateral view. Postural/shape and dynamic expressive gesture features were identified and analyzed. A feature reduction procedure was carried out, resulting in a 4D model of emotion expression that effectively classified and grouped emotions according to their valence (positive, negative) and arousal (high, low). These results show that emotionally relevant information can be detected and measured from the dynamic qualities of gesture. The framework was implemented as software modules (plug-ins) extending the EyesWeb XMI Expressive Gesture Processing Library, and it is intended for use in user-centric, networked media applications, including future mobile devices characterized by low computational resources and limited sensor systems.
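As an illustration of the kind of pipeline this abstract describes, the sketch below reduces a matrix of gesture features to four dimensions and groups portrayals by valence/arousal quadrant. The use of PCA for the feature reduction step, the feature names, and the synthetic data are assumptions made for illustration, not the authors' actual procedure.

```python
# Minimal sketch (not the paper's exact pipeline): reduce postural and
# dynamic gesture features to a 4D space, then summarize portrayals by
# valence/arousal quadrant. Features and labels here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# One row per portrayal: hypothetical gesture features such as hand
# movement energy, spatial extent of posture, head velocity, etc.
features = rng.normal(size=(120, 12))    # 120 portrayals, 12 features
labels = rng.integers(0, 4, size=120)    # quadrant labels: 0..3

# Standardize, then project onto the first four principal components,
# mirroring the 4D reduced model described in the abstract.
reduced = PCA(n_components=4).fit_transform(
    StandardScaler().fit_transform(features)
)

# Group portrayals by valence (positive/negative) and arousal (high/low).
quadrants = {
    0: "positive valence / high arousal",
    1: "positive valence / low arousal",
    2: "negative valence / high arousal",
    3: "negative valence / low arousal",
}
for q, name in quadrants.items():
    centroid = reduced[labels == q].mean(axis=0)
    print(f"{name}: centroid in 4D space = {np.round(centroid, 2)}")
```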
When people perform a task as part of a joint action, their behavior is not the same as it would be if they were performing the same task alone: it has to be adapted to facilitate shared understanding (or sometimes to prevent it). Joint performance of music offers a test bed for ecologically valid investigations of the way non-verbal behavior facilitates joint action. Here we compare the expressive movement of violinists when playing in solo and ensemble conditions. The first violinists of two string quartets, one professional and one student, were asked to play the same musical fragments in a solo condition and with the quartet. Synchronized multimodal recordings of the performances were created using a specially developed software platform. Different patterns of head movement were observed across the two conditions. By quantifying these patterns using an appropriate measure of entropy, we showed that head movements are more predictable in the quartet scenario. Rater evaluations showed that this change does not, as might be assumed, entail markedly reduced expression: raters showed some ability to discriminate between solo and ensemble performances, but did not distinguish them in terms of emotional content or expressiveness. The data raise provocative questions about joint action in realistically complex scenarios.
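The abstract does not name the entropy measure used. A common choice for quantifying the predictability of movement time series is sample entropy, where lower values indicate more regular, more predictable motion; the following minimal sketch illustrates the idea under that assumption, using synthetic head-movement signals.

```python
# Sketch of quantifying movement predictability with sample entropy.
# Treating the paper's "appropriate measure of entropy" as sample
# entropy is an assumption for illustration only.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1D series: lower = more predictable."""
    x = np.asarray(x, dtype=float)
    r *= x.std()                # tolerance scaled to the signal
    n = len(x)

    def count_matches(m):
        # Embed the series into overlapping templates of length m.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Illustrative comparison: a more regular (ensemble-like) head trajectory
# should yield lower entropy than an irregular (solo-like) one.
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 2000)
regular = np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.normal(size=t.size)
irregular = np.sin(2 * np.pi * 0.5 * t) + 0.5 * rng.normal(size=t.size)
print("regular:  ", sample_entropy(regular))
print("irregular:", sample_entropy(irregular))
```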