Emotional facial expressions play a critical role in theories of emotion and figure prominently in research on almost every aspect of emotion. This article provides a background for a new database of basic emotional expressions. The goal in creating this set was to provide high-quality photographs of genuine facial expressions. Thus, after appropriate training, participants were encouraged to express genuinely "felt" emotions. The novel approach taken in this study was also used to establish whether a given expression was perceived as intended by untrained judges. The judgment task for perceivers was designed to be sensitive to subtle changes in meaning caused by the way an emotional display was evoked and expressed. Consequently, this allowed us to measure the purity and intensity of emotional displays, parameters that the validation methods used by other researchers do not capture. The final set comprises the pictures that received the highest recognition marks (i.e., agreement with the intended display) from independent judges: 210 high-quality photographs of 30 individuals. Descriptions of the accuracy, intensity, and purity of the displayed emotion, as well as FACS Action Unit (AU) codes, are provided for each picture. Given the unique methodology applied to gathering and validating this set of pictures, it may be a useful tool for research using face stimuli. The Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) is freely accessible to the scientific community for non-commercial use by request at http://www.emotional-face.org.
Facial mimicry has long been considered a main mechanism underlying emotional contagion (i.e. the transfer of emotions between people). A closer look at the empirical evidence, however, reveals that although these two phenomena often co-occur, the changes in emotional expressions may not necessarily be causally linked to the changes in subjective emotional experience. Here, we directly investigate this link, by testing a model in which facial activity served as a mediator between the observed emotional displays and subsequently felt emotions (i.e. emotional contagion). Participants watched videos of different senders displaying happiness, anger, or sadness, while their facial activity was recorded. After each video, participants rated their own emotions and assessed the senders' likeability and competence. Participants both mimicked and reported feeling the emotions displayed by the senders. Moreover, their facial activity partially explained the association between the senders' emotional displays and self-reported emotions, thereby supporting the notion that facial mimicry may be involved in emotional contagion.
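The mediation logic described above (sender's display → participant's facial activity → participant's felt emotion) can be sketched as a simple path analysis. The sketch below uses simulated data with illustrative variable names and effect sizes; it is not the study's data or analysis code, only a minimal illustration of how an indirect (mimicry-mediated) effect is estimated.

```python
import numpy as np

# Simulated illustration: display intensity (x) drives facial mimicry (m),
# which in turn drives self-reported emotion (y). All coefficients are invented.
rng = np.random.default_rng(0)
n = 500

x = rng.normal(size=n)                       # intensity of the sender's emotional display
m = 0.6 * x + rng.normal(size=n)             # facial activity (e.g., EMG over zygomaticus)
y = 0.5 * m + 0.2 * x + rng.normal(size=n)   # self-reported emotion after the video

def ols(design, outcome):
    """Least-squares coefficients, with an intercept prepended to the design."""
    X = np.column_stack([np.ones(len(design)), design])
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

a = ols(x, m)[1]                             # path a: display -> mimicry
b = ols(np.column_stack([m, x]), y)[1]       # path b: mimicry -> emotion, display held constant
c = ols(x, y)[1]                             # total effect of display on emotion
indirect = a * b                             # the mediated (contagion-via-mimicry) component

print(f"total effect c = {c:.2f}, indirect effect a*b = {indirect:.2f}")
```

Because the indirect effect `a * b` is smaller than the total effect `c`, the sketch reproduces the *partial* mediation pattern the abstract reports: facial activity explains part, but not all, of the link between observed displays and felt emotions.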
Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of the fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In three experiments, participants saw faces with expressions ranging from pure anger, through mixed expressions, to pure happiness. Perceivers first categorized faces either on a control dimension or on an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that, on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity.
Facial features that resemble emotional expressions influence key social evaluations, including trust. Here, we present four experiments testing how the impact of such expressive features is qualified by their processing difficulty. We show that faces with mixed expressive features are relatively devalued, and faces with pure expressive features are relatively valued. This is especially true when participants first engage in a categorisation task that makes processing of mixed expressions difficult and pure expressions easy. Critically, we also demonstrate that the impact of categorisation fluency depends on the specific nature of the expressive features. When faces vary on valence (i.e., sad to happy), trust judgments increase with their positivity, but also depend on fluency. When faces vary on social motivation (i.e., angry to sad), trust judgments increase with their approachability, but remain impervious to disfluency. This suggests that people intelligently use fluency when judging valence-relevant dimensions, but not when faces can be judged using other relevant criteria, such as motivation. Overall, the findings highlight that key social impressions (like trust) are flexibly constructed from inputs related to stimulus features and processing experience.
Social interactions require quick perception, interpretation, and categorization of faces, with facial features offering cues to emotions, intentions, and traits. Importantly, reactions to faces depend not only on their features but also on their processing fluency, with disfluent faces suffering social devaluation. The current research used electrophysiological (EEG) and behavioral measures to explore at what processing stage and under what conditions emotional ambiguity is detected in the brain and how it influences trustworthiness judgments. Participants viewed male and female faces ranging from pure anger, through mixed expressions, to pure happiness. They categorized each face along the experimental dimension (happy vs. angry) or a control dimension (gender). In the emotion-categorization condition, mixed (ambiguous) expressions were classified relatively more slowly, and their trustworthiness was rated relatively lower. EEG analyses revealed that early brain responses are independent of the categorization condition, with pure faces evoking larger P1/N1 responses than mixed expressions. Some late (728-880 ms) brain responses from central-parietal sites were also independent of the categorization condition and presumably reflect familiarity of the emotion categories, with pure expressions evoking larger central-parietal LPP amplitude than mixed expressions. Interestingly, other late responses were sensitive to both expressive features and categorization task, with ambiguous faces evoking a larger LPP amplitude at frontal-medial sites around 560-660 ms, but only in the emotion categorization task. Critically, these late responses from the frontal-medial cluster correlated with the reduction in trustworthiness judgments. Overall, the results suggest that ambiguity detection involves late, top-down processes and that it influences important social impressions.
Many studies have explored the evaluative effects of vertical (up/down) or horizontal (left/right) spatial locations. However, little is known about the role of information that comes from the front and back. Basing our investigations on multiple theoretical considerations, we propose that the spatial location of sounds is a cue for message valence, such that a message coming from behind is interpreted as more negative than a message presented in front of a listener. Here we show across a variety of manipulations and dependent measures that this effect occurs in the domain of social information. Our data are most compatible with theoretical accounts which propose that social information presented from behind is associated with uncertainty and lack of control, which is amplified in conditions of self-relevance.
Keywords: alarm theory, rear negativity, sound, spatial location
The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
Three studies investigated the effects of two fundamental dimensions of social perception on emotional contagion (i.e., the transfer of emotions between people). Rooting our hypotheses in the Dual Perspective Model of Agency and Communion (Abele and Wojciszke in Adv Exp Soc Psychol 50:198–255, 10.1016/B978-0-12-800284-1.00004-7, 2014), we predicted that agency would strengthen the effects of communion on emotional contagion and emotional mimicry (a process often considered a key mechanism behind emotional contagion). To test this hypothesis, we exposed participants to happy, sad, and angry senders characterized by low vs. high communion and agency. Our results demonstrated that, as expected, the effects of the two dimensions on socially induced emotions were interactive. The strength and direction of these effects, however, were consistent with our predictions only when the senders expressed happiness. When the senders expressed sadness, we found no effects of agency or communion on participants’ emotional responses, whereas for anger a mixed pattern emerged. Overall, our results align with the notion that emotional contagion and mimicry are modulated not only by the senders’ traits but also by the social meaning of the expressed emotion.