Sensorimotor brain areas have been implicated in the recognition of emotion expressed on the face and through non-verbal vocalizations. However, no previous study has assessed whether sensorimotor cortices are recruited during the perception of emotion in speech, a signal that includes both audio (speech sounds) and visual (facial speech movements) components. To address this gap in the literature, we recruited 24 participants to attend to speech clips expressed in a happy, sad, or neutral manner. These stimuli were presented in one of three modalities: audio-only (hearing the voice but not seeing the face), video-only (seeing the face but not hearing the voice), or audiovisual. Brain activity was recorded using electroencephalography, subjected to independent component analysis, and source-localized. We found that the left pre-supplementary motor area (pre-SMA) was more active in response to happy and sad stimuli than to neutral stimuli, as indexed by greater mu event-related desynchronization. This effect did not differ by the sensory modality of the stimuli. Activity levels in other sensorimotor brain areas did not differ by emotion, although they were greatest in response to video-only and audiovisual stimuli. One possible explanation for the pre-SMA result is that this brain area may actively support speech emotion recognition by using our extensive experience expressing emotion to generate sensory predictions that in turn guide our perception.
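The abstract does not specify how mu event-related desynchronization was computed. For reference only, below is a minimal sketch of a standard percent-ERD calculation, assuming a band-pass-plus-Hilbert estimate of mu-band (8–13 Hz) power and a pre-stimulus baseline; the band limits, baseline window, and variable names are illustrative assumptions, not the pipeline used in the study.

```python
# Illustrative sketch of a standard mu event-related desynchronization (ERD)
# computation; parameters (8-13 Hz band, baseline window) are assumptions,
# not necessarily those used in the study described above.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def mu_erd_percent(epochs, sfreq, tmin=-1.0, baseline=(-0.5, 0.0), band=(8.0, 13.0)):
    """epochs: array of shape (n_trials, n_samples) for one source/channel.

    Returns percent ERD over time: negative values indicate desynchronization
    (a power decrease relative to the pre-stimulus baseline).
    """
    # Zero-phase band-pass filter in the mu band.
    b, a = butter(4, [band[0], band[1]], btype="bandpass", fs=sfreq)
    filtered = filtfilt(b, a, epochs, axis=-1)

    # Instantaneous power via the Hilbert envelope, averaged across trials.
    power = np.abs(hilbert(filtered, axis=-1)) ** 2
    mean_power = power.mean(axis=0)

    # Mean power in the pre-stimulus baseline window.
    times = tmin + np.arange(epochs.shape[-1]) / sfreq
    base_mask = (times >= baseline[0]) & (times < baseline[1])
    base_power = mean_power[base_mask].mean()

    # Classic percent-change ERD definition.
    return 100.0 * (mean_power - base_power) / base_power
```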
This study aims to clarify unresolved questions from two earlier studies (McGarry et al., 2012; Kaplan & Iacoboni, 2007) on human mirror neuron system (hMNS) responsivity to multimodal presentations of actions. These questions are: (1) whether the two frontal areas originally identified by Kaplan and Iacoboni (ventral premotor cortex [vPMC] and inferior frontal gyrus [IFG]) are both part of the hMNS (i.e., whether they respond to execution as well as observation), (2) whether both areas show effects of biologicalness (biological, control) and modality (audio, visual, audiovisual), and (3) whether the vPMC is preferentially responsive to multimodal input. To resolve these questions, we replicated and extended McGarry et al.'s electroencephalography (EEG) study while incorporating advanced source localization methods. Participants executed movements (ripping paper) and observed those movements across the same three modalities (audio, visual, and audiovisual) while 64-channel EEG data were recorded. Two frontal sources consistent with those identified in prior studies showed mu event-related desynchronization (mu-ERD) under both execution and observation conditions. These sources also showed a greater response to biological movement than to control stimuli, as well as a distinct visual advantage, with greater responsivity to the visual and audiovisual conditions than to the audio condition. Exploratory analyses of mu-ERD in the vPMC under the visual and audiovisual observation conditions suggest that the hMNS tracks the magnitude of visual movement over time.
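The abstract does not detail how the magnitude of visual movement was quantified or related to mu-ERD. One simple, hypothetical way to test such tracking is sketched below, using frame-differencing motion energy and a Pearson correlation with the ERD time course; the motion measure, function names, and parameters are illustrative assumptions rather than the authors' method.

```python
# Hypothetical sketch: relate a mu-ERD time course to the magnitude of
# visual movement in the stimulus, using frame differencing as a simple
# motion-energy proxy. The actual measure used in the study may differ.
import numpy as np

def motion_energy(frames):
    """frames: array (n_frames, height, width), grayscale video.
    Returns mean absolute frame-to-frame difference per frame transition."""
    return np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))

def erd_tracks_motion(erd_timecourse, erd_times, frames, frame_rate):
    """Pearson correlation between mu-ERD and stimulus motion energy,
    after resampling motion energy onto the EEG time base."""
    motion = motion_energy(frames)
    frame_times = (np.arange(len(motion)) + 1) / frame_rate
    motion_on_eeg_time = np.interp(erd_times, frame_times, motion)
    return np.corrcoef(erd_timecourse, motion_on_eeg_time)[0, 1]
```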