Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two sets of 150 emotion portrayals, identical except for display format (full-light and point-light), were created from the same digital footage, along with corresponding static images of the 'peak' of each emotion portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that full-light and even point-light static images can convey identifiable emotions, though less efficiently than dynamic displays. Recognition success differed for individual emotions, corroborating earlier results on the importance of distinctive movement characteristics for different emotional expressions. The patterns of misclassifications were in keeping with earlier findings on emotional clustering. Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, but notably not for sadness, and (b) produced higher emotional-intensity ratings, regardless of lighting condition, for movies, but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than on static form information.
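As an illustration of how such forced-choice data are typically scored, the sketch below builds a confusion matrix from (portrayed, chosen) response pairs and derives per-emotion recognition accuracy. The trial data and variable names are hypothetical placeholders, not the study's actual responses.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness"]
IDX = {e: i for i, e in enumerate(EMOTIONS)}

# Hypothetical forced-choice trials: (portrayed emotion, observer's choice).
trials = [
    ("anger", "anger"), ("disgust", "anger"), ("fear", "fear"),
    ("fear", "sadness"), ("happiness", "happiness"), ("sadness", "sadness"),
]

# Confusion matrix: rows = portrayed emotion, columns = chosen response.
# Off-diagonal cells reveal which emotions are systematically confused.
confusion = np.zeros((len(EMOTIONS), len(EMOTIONS)), dtype=int)
for portrayed, chosen in trials:
    confusion[IDX[portrayed], IDX[chosen]] += 1

# Per-emotion recognition accuracy: diagonal counts over row totals.
totals = confusion.sum(axis=1)
for emotion, hits, n in zip(EMOTIONS, np.diag(confusion), totals):
    if n:
        print(f"{emotion}: {hits / n:.2f} ({hits}/{n})")
```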
Basic emotional states (such as anger, fear, and joy) can be similarly conveyed by the face, the body, and the voice. Are there human brain regions that represent these emotional mental states regardless of the sensory cues from which they are perceived? To address this question, in the present study participants evaluated the intensity of emotions perceived from face movements, body movements, or vocal intonations, while their brain activity was measured with functional magnetic resonance imaging (fMRI). Using multivoxel pattern analysis, we compared the similarity of response patterns across modalities to test for brain regions in which emotion-specific patterns in one modality (e.g., faces) could predict emotion-specific patterns in another modality (e.g., bodies). A whole-brain searchlight analysis revealed modality-independent but emotion category-specific activity patterns in medial prefrontal cortex (MPFC) and left superior temporal sulcus (STS). Multivoxel patterns in these regions contained information about the category of the perceived emotions (anger, disgust, fear, happiness, sadness) across all modality comparisons (face-body, face-voice, body-voice), and independently of the perceived intensity of the emotions. No systematic emotion-related differences were observed in the overall amplitude of activation in MPFC or STS. These results reveal supramodal representations of emotions in high-level brain areas previously implicated in affective processing, mental state attribution, and theory-of-mind. We suggest that MPFC and STS represent perceived emotions at an abstract, modality-independent level, and thus play a key role in the understanding and categorization of others' emotional mental states.
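A minimal sketch of the cross-modal logic described above: emotion-specific patterns from one modality are used to predict (here, via highest Pearson correlation) emotion labels in another modality, so above-chance accuracy implies modality-independent emotion information. The arrays, ROI size, and correlation-based classification rule are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

# Hypothetical inputs: one multivoxel response pattern per emotion and
# modality (e.g., averaged beta estimates from an ROI or searchlight sphere).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness"]

def crossmodal_accuracy(patterns_a, patterns_b):
    """Correlation-based cross-modal classification.

    An emotion pattern in modality A counts as correctly classified if it
    correlates more strongly with the same emotion's pattern in modality B
    than with any other emotion's pattern in B.
    """
    n = patterns_a.shape[0]
    sim = np.corrcoef(patterns_a, patterns_b)[:n, n:]  # A-to-B similarities
    return (sim.argmax(axis=1) == np.arange(n)).mean()

# Placeholder data: 5 emotion patterns x 200 voxels per modality.
rng = np.random.default_rng(0)
patterns = {m: rng.standard_normal((len(EMOTIONS), 200))
            for m in ("face", "body", "voice")}

for a, b in [("face", "body"), ("face", "voice"), ("body", "voice")]:
    acc = crossmodal_accuracy(patterns[a], patterns[b])
    print(f"{a}-{b}: accuracy = {acc:.2f} (chance = {1 / len(EMOTIONS):.2f})")
```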
Background. Previous behavioural and neuroimaging studies of emotion processing in autistic spectrum disorder (ASD) have focused on the use of facial stimuli. To date, however, no studies have examined emotion processing in autism across a broad range of social signals. Method. This study addressed this issue by investigating emotion processing in a group of 23 adults with ASD and 23 age- and gender-matched controls. Recognition of basic emotions ('happiness', 'sadness', 'anger', 'disgust' and 'fear') was assessed from facial, body movement and vocal stimuli. The ability to make social judgements (such as approachability) from facial stimuli was also investigated. Results. Significant deficits in emotion recognition were found in the ASD group relative to the control group across all stimulus domains (faces, body movements and voices). These deficits were seen across a range of emotions. The ASD group were also impaired in making social judgements compared to the control group, and this correlated with impairments in basic emotion recognition. Conclusions. This study demonstrates that there are significant and broad-ranging deficits in emotion processing in ASD, present across a range of stimulus domains and in both the auditory and visual modalities; they cannot therefore be accounted for simply in terms of impairments in face processing or in the visual modality alone. These results identify a core deficit affecting the processing of a wide range of emotional information in ASD, which contributes to the impairments in social function seen in people with this condition.

Introduction. Autism, as defined by DSM-IV criteria, is a developmental disorder characterized by difficulties in social interaction, a restricted, repetitive range of interests and behaviours, and impairments in verbal and nonverbal communication. There is a broad clinical phenotype that encompasses a wide range of behaviour and degrees of global intellectual impairment. This results in a diverse clinical population, generally described as having an autism spectrum disorder (ASD). Individuals on the autism spectrum who do not show global intellectual impairment are commonly referred to as having high-functioning autism (HFA) if they have a history of significant language delay, and Asperger syndrome (AS) if they do not. For adults with HFA/AS it is the difficulties in social communication and interaction that are frequently the most debilitating. Studies have identified deficits in facial emotion recognition in both children (C...
Seven experiments investigated the finding that threatening schematic faces are detected more quickly than nonthreatening faces. Threatening faces with v-shaped eyebrows (angry and scheming expressions) were detected more quickly than nonthreatening faces with inverted v-shaped eyebrows (happy and sad expressions). Contrary to the hypothesis that these effects were due to perceptual features unrelated to the face, no detection advantage was found for v-shaped eyebrows presented in a non-face-like object. Furthermore, the addition of internal facial features (the eyes, or the nose and mouth) was necessary to produce the detection advantage for faces with v-shaped eyebrows. Overall, the results are interpreted as showing that the v-shaped eyebrow configuration affords easy detection, but only when other internal facial features are present.
Emotionally expressive faces have been shown to modulate activation in visual cortex, including face-selective regions in the ventral temporal lobe. Here, we tested whether emotionally expressive bodies similarly modulate activation in body-selective regions. We show that dynamic displays of bodies with various emotional expressions, versus neutral bodies, produce significant activation in two distinct body-selective visual areas, the extrastriate body area and the fusiform body area. Multi-voxel pattern analysis showed that the strength of this emotional modulation was related, on a voxel-by-voxel basis, to the degree of body selectivity, whereas there was no relation with the degree of selectivity for faces. Across subjects, amygdala responses to emotional bodies correlated positively with the modulation of body-selective areas. Together, these results suggest that emotional cues from body movements produce topographically selective influences on category-specific populations of neurons in visual cortex, and that these increases may reflect discrete modulatory projections from the amygdala.
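The voxel-by-voxel analysis can be sketched as follows: compute, for each voxel, an emotional-modulation score (emotional minus neutral bodies) and a body-selectivity score (e.g., bodies minus a control category from a localizer), then correlate the two across voxels. All response values, names, and contrast choices below are placeholder assumptions for illustration, not the authors' actual contrasts.

```python
import numpy as np

# Placeholder per-voxel response estimates within a body-selective ROI
# (e.g., the extrastriate body area); names and contrasts are assumptions.
rng = np.random.default_rng(1)
n_voxels = 500
resp_emotional_bodies = rng.standard_normal(n_voxels)
resp_neutral_bodies = rng.standard_normal(n_voxels)
resp_bodies_localizer = rng.standard_normal(n_voxels)
resp_control_localizer = rng.standard_normal(n_voxels)

# Voxel-wise emotional modulation and category selectivity.
emotional_modulation = resp_emotional_bodies - resp_neutral_bodies
body_selectivity = resp_bodies_localizer - resp_control_localizer

# Key test: across voxels, does emotional modulation track body selectivity?
r = np.corrcoef(emotional_modulation, body_selectivity)[0, 1]
print(f"voxel-wise correlation: r = {r:.3f}")
```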