Abstract

The voice is a rich source of information, which the human brain has evolved to decode and interpret. Empirical observations have shown that the human auditory system is especially sensitive to the human voice, and that activity within the voice-sensitive regions of the primary and secondary auditory cortex is modulated by the emotional quality of the vocal signal, and may therefore subserve, with frontal regions, the cognitive ability to correctly identify the speaker's affective state. So far, the network involved in the processing of vocal affect has been mainly characterised at the cortical level. However, anatomical and functional evidence suggests that acoustic information relevant to the affective quality of the auditory signal might be processed prior to the auditory cortex.

Here we review the animal and human literature on the main subcortical structures along the auditory pathway, and propose a model whereby the distinction between different types of vocal affect in auditory communication begins at very early stages of auditory processing, and relies on the analysis of individual acoustic features of the sound signal. We further suggest that this early feature-based decoding occurs at a subcortical level along the ascending auditory pathway, and provides a preliminary coarse (but fast) characterisation of the affective quality of the auditory signal before the more refined (but slower) cortical processing is completed.
Detecting regularity and change in the environment is crucial for survival, as it enables making predictions about the world and informing goal-directed behavior. In the auditory modality, the detection of regularity involves segregating incoming sounds into distinct perceptual objects (stream segregation). The detection of change from this within-stream regularity is associated with the mismatch negativity, a component of auditory event-related brain potentials (ERPs). A central unanswered question is how the detection of regularity and the detection of change are interrelated, and whether attention affects the former, the latter, or both. Here we show that the detection of regularity and the detection of change can be empirically dissociated, and that attention modulates the detection of change without precluding the detection of regularity or the perceptual organization of the auditory background into distinct streams. By applying frequency-spectrum analysis to the EEG of subjects engaged in a selective listening task, we found distinct peaks of ERP synchronization, corresponding to the rhythms of the frequency streams, independently of whether the stream was attended or ignored. Our results provide direct neurophysiological evidence of regularity detection in the auditory background, and show that it can occur independently of change detection and in the absence of attention.
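The spectral logic described above (looking for synchronization peaks in the EEG spectrum at the presentation rates of the tone streams) can be illustrated with a minimal sketch. All values here are illustrative, not those of the study: we simulate 60 s of noisy "EEG" containing two stream rhythms at 1.25 Hz and 2.0 Hz and recover them as the dominant low-frequency spectral peaks.

```python
import numpy as np

fs = 250.0                     # sampling rate in Hz (illustrative)
t = np.arange(0, 60, 1 / fs)   # 60 s of simulated signal

# Simulated "EEG": two stream rhythms (e.g. two tone streams
# presented at 1.25 Hz and 2.0 Hz) buried in broadband noise.
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 1.25 * t)
       + 0.8 * np.sin(2 * np.pi * 2.0 * t)
       + 1.5 * rng.standard_normal(t.size))

# Amplitude spectrum via the real FFT; with a 60 s window the
# frequency resolution is 1/60 Hz, so both rates fall on exact bins.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Restrict to a plausible stimulation-rate band and take the two
# largest peaks; these land at the two stream rhythms.
band = (freqs > 0.5) & (freqs < 5.0)
band_freqs, band_amp = freqs[band], spectrum[band]
peak_freqs = np.sort(band_freqs[np.argsort(band_amp)[-2:]])
print(peak_freqs)
```

In the actual experiment the peaks are computed from recorded EEG rather than a simulation, and the key comparison is whether a peak at a stream's rhythm survives when that stream is ignored.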
Keywords: Amygdala; Temporal voice area; Auditory emotions; fMRI

Abstract

Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions.

© 2016 Elsevier Ltd. All rights reserved. Cortex 85 (2016) 116–125.