Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.
Emotion recognition ability has been identified as a central component of emotional competence. We describe the development of an instrument that objectively measures this ability on the basis of actor portrayals of dynamic expressions of 10 emotions (2 variants each for 5 emotion families), operationalized as recognition accuracy in 4 presentation modes combining the visual and auditory sense modalities (audio/video, audio only, video only, still picture). Data from a large validation study, including construct validation using related tests, are reported. The results show the utility of a test designed to measure both coarse and fine-grained emotion differentiation and modality-specific skills. Factor analysis of the data suggests 2 separate abilities, visual and auditory recognition, which seem to be largely independent of personality dispositions.
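The operationalization described above, recognition accuracy computed separately per presentation mode, can be sketched as follows. The trial records are entirely hypothetical and are not data from the study; they only illustrate the accuracy computation.

```python
from collections import defaultdict

# Hypothetical trial records: (presentation mode, target emotion, judged emotion).
# These are illustrative values, not data from the validation study.
trials = [
    ("audio/video",   "joy",  "joy"),
    ("audio/video",   "fear", "fear"),
    ("audio only",    "joy",  "anger"),
    ("audio only",    "fear", "fear"),
    ("video only",    "joy",  "joy"),
    ("still picture", "fear", "joy"),
]

hits = defaultdict(int)    # correct identifications per mode
totals = defaultdict(int)  # trials presented per mode
for mode, target, response in trials:
    totals[mode] += 1
    hits[mode] += (response == target)

# Recognition accuracy per presentation mode
accuracy = {mode: hits[mode] / totals[mode] for mode in totals}
```

A per-emotion breakdown within each mode would follow the same pattern with a `(mode, target)` key, which is what a coarse vs. fine-grained differentiation analysis would require.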
The influence of emotions on intonation patterns (more specifically, F0/pitch contours) is addressed in this article. A number of authors have claimed that specific intonation patterns reflect specific emotions, whereas others have found little evidence supporting this claim and argued that F0/pitch and other vocal aspects are continuously, rather than categorically, affected by emotions and/or emotional arousal. In this contribution, a new coding system for the assessment of F0 contours in emotion portrayals is presented. Results obtained for actor-portrayed emotional expressions show that mean level and range of F0 in the contours vary strongly as a function of the degree of activation of the portrayed emotions. In contrast, there was comparatively little evidence for qualitatively different contour shapes for different emotions.
We tested Ekman's (2003) suggestion that movements of a small number of reliable facial muscles are particularly trustworthy cues to experienced emotion because they tend to be difficult to produce voluntarily. On the basis of theoretical predictions, we identified two subsets of facial action units (AUs): reliable AUs and versatile AUs. A survey on the controllability of facial AUs confirmed that reliable AUs indeed seem more difficult to control than versatile AUs, although the distinction between the two sets of AUs should be understood as a difference in degree of controllability rather than a discrete categorization. Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges. The effect of the two subsets of AUs (reliable AUs and versatile AUs) on identification of the emotion conveyed, its perceived authenticity, and perceived intensity was investigated. Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed. We found little evidence, however, for specific links between individual AUs and particular emotion categories. We conclude that reliable AUs may indeed convey trustworthy information about emotional processes but that most of these AUs are likely to be shared by several emotions rather than providing information about specific emotions. This study also suggests that the issue of reliable facial muscles may generalize beyond the Duchenne smile.
Despite extensive research activity on the recognition of emotional expression, there are only a few validated tests of individual differences in this competence (generally considered part of nonverbal sensitivity and emotional intelligence). This paper reports the development of a short multichannel version (MiniPONS) of the established Profile of Nonverbal Sensitivity (PONS) test. The full test has been extensively validated in many different cultures, showing substantial correlations with a large range of outcome variables. The short multichannel version (64 items) described here correlates very highly with the full version and shows reasonable construct validity through significant correlations with other tests of emotion recognition ability. Based on these results, the role of nonverbal sensitivity as part of a latent trait of emotional competence is discussed, and the MiniPONS is suggested as a convenient method for rapid screening of this central socioemotional competence.
In an experimental study, four levels of oculomotor load were induced binocularly. Trapezius muscle activity was measured with bipolar surface electromyography and normalized to a submaximal contraction. Twenty-eight subjects with a mean age of 29 (range 19-42, SD 8) viewed a high-contrast fixation target for four 5-min periods through: (i) -3.5 dioptre (D) lenses; (ii) 0 D lenses; (iii) individually adjusted prism D lenses (1-2 D base out); and (iv) +3.5 D lenses. The target was placed close to the individual's age-appropriate near point of accommodation in conditions (i)-(iii) and at 3 m in condition (iv). Each subject's ability to compensate for the added blur was extracted via infrared photorefraction measurements. A piecewise linear regression model was fitted at the group level, with eye-lens refraction on the x-axis and normalized trapezius muscle EMG (%RVE) on the y-axis. The model had a constant level of trapezius muscle activity, where subjects had not compensated for the incurred defocus by a change in eye-lens accommodation, and a slope, where they had compensated. The slope coefficient was significantly positive in the -D blur (i) and +D blur (iv) conditions. During no blur (ii) and prism blur (iii) there was no sign of a relationship, nor was there any relationship between the convergence response and trapezius muscle EMG in any of the experimental conditions. The results appear directly attributable to an engagement of the eye-lens accommodative system and most likely reflect sensorimotor processing along its reflex arc for the purpose of stabilizing gaze.
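A two-segment model of this kind (a constant level below a breakpoint, a linear slope above it) can be fitted by nonlinear least squares. The sketch below uses synthetic data invented for illustration; the breakpoint, level, and slope values are assumptions, not estimates from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def piecewise(x, bp, level, slope):
    """Constant level below the breakpoint bp, linear rise above it."""
    return np.where(x < bp, level, level + slope * (x - bp))

# Synthetic data, for illustration only: x mimics eye-lens refraction (D),
# y mimics normalized trapezius EMG (%RVE).
rng = np.random.default_rng(0)
x = np.linspace(-4.0, 4.0, 200)
y_true = piecewise(x, 0.5, 10.0, 3.0)        # assumed "true" parameters
y = y_true + rng.normal(0.0, 0.5, x.size)    # add measurement noise

# Fit the two-segment model; p0 gives rough starting values.
params, _ = curve_fit(piecewise, x, y, p0=[0.0, 5.0, 1.0])
bp, level, slope = params
```

Testing whether `slope` differs significantly from zero, as in the abstract, would additionally require its standard error, which `curve_fit` provides via the returned covariance matrix.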