During communication, we perceive and express emotional information through many different channels, including facial expressions, prosody, body motion, and posture. Although historically the human body has been perceived primarily as a tool for actions, there is now increased understanding that the body is also an important medium for emotional expression. Indeed, research on emotional body language is rapidly emerging as a new field in cognitive and affective neuroscience. This article reviews how whole-body signals are processed and understood, at the behavioral and neural levels, with specific reference to their role in emotional communication. The first part of this review outlines the brain regions and spectrotemporal dynamics underlying perception of isolated neutral and affective bodies, the second part details contextual effects on body emotion recognition, and the final part discusses body processing at the subconscious level. More specifically, research has shown that body expressions, as compared with neutral bodies, draw upon a larger network of regions responsible for action observation and preparation, emotion processing, body processing, and integrative processes. Results from neurotypical populations and masking paradigms suggest that subconscious processing of affective bodies relies on a specific subset of these regions. Moreover, recent evidence has shown that emotional information from the face, voice, and body all interact, with body motion and posture often highlighting and intensifying the emotion expressed in the face and voice.
Social aggression, such as domestic violence, has been associated with a reduced ability to take on others’ perspectives. In this naturalistic imaging study, we investigated whether training human participants to take on a first-person embodied perspective during the experience of domestic violence enhances identification with the victim and elicits brain activity associated with the monitoring of the body and surrounding space and with the experience of threat. We combined fMRI measurements with preceding virtual reality exposure from either a first-person perspective (1PP) or a third-person perspective (3PP) to manipulate whether the domestic abuse stimulus was perceived as directed toward oneself or toward another. We found that 1PP exposure increased body ownership and identification with the virtual victim. Furthermore, when the stimulus was perceived as directed toward oneself, the brain network that encodes the bodily self and its surrounding space was more strongly synchronized across participants, and connectivity increased from the premotor cortex (PM) and the intraparietal sulcus toward the superior parietal lobe. Additionally, when the stimulus came near the body, brain activity in the amygdala (AMG) strongly synchronized across participants. Exposure to 3PP reduced synchronization of brain activity in the personal space network, increased modulation of visual areas, and strengthened functional connectivity between PM, the supramarginal gyrus, and the primary visual cortex. In conclusion, our results suggest that 1PP embodiment training enhances experience from the viewpoint of the virtual victim, which is accompanied by synchronization in the fronto-parietal network to predict actions toward the body and in the AMG to signal the proximity of the stimulus.
Virtual reality (VR) promises methodological rigour with the extra benefit of allowing us to study the context-dependent behaviour of individuals in their natural environment. Pan and Hamilton (2018, Br. J. Psychol.) provide a useful overview of methodological recommendations for using VR. Here, we highlight some other aspects of the use of VR. Our first argument is that VR can be useful by virtue of its differences from the normal perceptual environment. That is, precisely because of its relative non-realism and the poverty of its perceptual elements, it can actually offer increased clarity with respect to the features of interest for the researcher. Our second argument is that VR exerts its measurable influence more by eliciting an acceptance of the virtual world (i.e., 'suspension of disbelief') than by eliciting a true belief in the realism of the VR environment. We conclude by providing a novel suggestion for combining neuroimaging methods with embodied VR that relies on the suspension of disbelief.
Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses, we showed that the primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor, and visual cortices, the patterns that discriminated imagery modality were similar to those that discriminated perception modality, suggesting that top-down modulations in these regions rely on neural representations similar to those of bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions.
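To make the cross-decoding logic behind such multivariate pattern analyses concrete, the sketch below simulates voxel patterns for two conditions (touch vs. sound), trains a classifier on "perception" trials, and tests it on noisier "imagery" trials. Everything here is an illustrative assumption, not the study's actual pipeline: the ROI size, noise levels, and the nearest-centroid classifier (a minimal stand-in for the SVMs or LDA typically used) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ROI with 50 voxels; each condition has a fixed underlying
# activation pattern plus independent trial-by-trial noise.
n_voxels, n_trials = 50, 40
pattern_touch = rng.normal(0, 1, n_voxels)
pattern_sound = rng.normal(0, 1, n_voxels)

def simulate(pattern, n, noise=1.0):
    """Generate n noisy trials around a condition's mean pattern."""
    return pattern + rng.normal(0, noise, (n, n_voxels))

# "Perception" runs form the training set, "imagery" runs the test set:
# cross-decoding asks whether perceptual patterns generalize to imagery.
train = np.vstack([simulate(pattern_touch, n_trials),
                   simulate(pattern_sound, n_trials)])
train_labels = np.array([0] * n_trials + [1] * n_trials)
test = np.vstack([simulate(pattern_touch, n_trials, noise=1.5),
                  simulate(pattern_sound, n_trials, noise=1.5)])
test_labels = train_labels.copy()

# Nearest-centroid decoding: assign each test trial to the closest
# training-condition mean pattern.
centroids = np.stack([train[train_labels == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == test_labels).mean()
print(f"cross-decoding accuracy: {accuracy:.2f}")  # well above chance (0.5)
```

If the two conditions shared no stable pattern across perception and imagery, accuracy would hover around the 0.5 chance level; above-chance cross-decoding is what licenses the claim that imagery reuses perceptual representations.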