Emotional mimicry is the imitation of the emotional expressions of others. According to the classic view on emotional mimicry (the Matched Motor Hypothesis), people mimic the specific facial movements that comprise a discrete emotional expression. However, little evidence exists for the mimicry of discrete emotions; rather, the extant evidence supports only valence-based mimicry. We propose an alternative Emotion Mimicry in Context view according to which emotional mimicry is not based on mere perception but rather on the interpretation of signals as emotional intentions in a specific context. We present evidence for the idea that people mimic contextualized emotions rather than simply expressive muscle movements. Our model postulates that (implicit or explicit) contextual information is needed for emotional mimicry to take place. It takes into account the relationship between observer and expresser, and suggests that emotional mimicry depends on this relationship and functions as a social regulator.
Recent application of theories of embodied or grounded cognition to the recognition and interpretation of facial expressions of emotion has led to an explosion of research in psychology and the neurosciences. However, despite the accelerating number of reported findings, it remains unclear how the many component processes of emotion and their neural mechanisms actually support embodied simulation. Equally unclear is what triggers the use of embodied simulation versus perceptual or conceptual strategies in determining meaning. The present article integrates behavioral research from social psychology with recent research in the neurosciences in order to provide coherence to extant and future research on this topic. The roles of the brain's reward systems, the amygdala, somatosensory cortices, and motor centers are examined. These are then linked to behavioral and brain research on facial mimicry and eye gaze. Articulation of the mediators and moderators of facial mimicry and gaze is particularly useful in guiding interpretation of relevant findings from the neurosciences. Finally, a model of the processing of the smile, the most complex of the facial expressions, is presented as a means to illustrate how to advance the application of theories of embodied cognition in the study of facial expressions of emotion.
Two studies provided direct support for a recently proposed dialect theory of communicating emotion, positing that expressive displays show cultural variations similar to linguistic dialects, thereby decreasing accurate recognition by out-group members. In Study 1, 60 participants from Quebec and Gabon posed facial expressions. Dialects, in the form of activating different muscles for the same expressions, emerged most clearly for serenity, shame, and contempt and also for anger, sadness, surprise, and happiness, but not for fear, disgust, or embarrassment. In Study 2, Quebecois and Gabonese participants judged these stimuli and stimuli standardized to erase cultural dialects. As predicted, an in-group advantage emerged for nonstandardized expressions only and most strongly for expressions with greater regional dialects, according to Study 1.
In the present research, we test the assumption that emotional mimicry and contagion are moderated by group membership. We report two studies using facial electromyography (EMG; Study 1), the Facial Action Coding System (FACS; Study 2), and self-reported emotions (Study 2) as dependent measures. As predicted, both studies show that ingroup anger and fear displays were mimicked to a greater extent than outgroup displays of these emotions. The self-report data in Study 2 further showed specific divergent reactions to outgroup anger and fear displays: outgroup anger evoked fear, and outgroup fear evoked aversion. Interestingly, mimicry increased liking for ingroup models but not for outgroup models. The findings are discussed in terms of the social functions of emotions in group contexts.
The present study concerned the influence of the presence of others on facial expressions of emotion. The proposition that facial expressive displays are better predicted by the social context than by emotional state (A. J. Fridlund, 1991) was tested in an experiment varying the sociality of the context, the intensity of the emotion elicitor, and the relationship between expressor and audience. The results indicate that the intensity of expressive displays cannot be satisfactorily predicted by any of these factors alone but is influenced by a complex interplay of all 3 factors.
The present research aimed to assess how people use knowledge about the emotional reactions of others to make inferences about their character. Specifically, we postulate that people can reconstruct or "reverse engineer" the appraisals underlying an emotional reaction and use this appraisal information to draw person perception inferences. As predicted, a person who reacted with anger to blame was perceived as more aggressive and self-confident, but also as less warm and gentle, than a person who reacted with sadness (Study 1). A person who reacted with a smile (Study 1) or remained neutral (Study 2) was perceived as self-confident but also as unemotional. These perceptions were mediated by perceived appraisals.