2016
DOI: 10.1016/j.cortex.2016.10.013

Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions

Keywords: Amygdala, Temporal voice area, Auditory emotions, fMRI

Abstract: Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the ac…

Cited by 28 publications (30 citation statements)
References: 38 publications
“…These are in agreement with previous reports of brain plasticity and improved cognitive functions like learning, verbal ability and memory after music exposure and training [72-75]. Acoustic stimuli from music-listening induce neural spike patterns which are transduced via the auditory pathway within milliseconds [76-78] and evoke emotions in the limbic system [79]. At the molecular level, neural stimulation is conducted via calcium channel activity and neurotransmitters, which activate immediate early genes (IEG) thereby regulating gene and microRNA expression patterns [80-83].…”
Section: Discussion (supporting)
confidence: 91%
“…We assumed that the human brain might process alarm screams with significant efficiency compared with non-alarm screams as quantified by the level of neural signals in [1] and the connectivity between brain areas that are central to affective sound processing [22,27]. Voice signals and affect bursts are usually processed in a distributed brain network, consisting of the auditory cortex, the amygdala, and the inferior frontal cortex (IFC) [1,22,28], which provide an acoustic and socio-affective analysis of these signals [29,30], respectively. To identify the neural dynamics of scream call processing in this network, we asked humans to listen to the same selected 84 screams as in experiments 2 and 3.…”
Section: Neural Efficiency and Significance For Non-alarm Scream Processing (mentioning)
confidence: 99%
“…The emotion-matching paradigm also strongly recruited the bilateral amygdala. Generally thought of as a module of automatic detection of emotions (Frühholz & Grandjean, 2013a; Öhman, 2002; Pannese et al., 2015, 2016; Phelps & LeDoux, 2005; Vuilleumier, Armony, Driver, & Dolan, 2001), the amygdala is preferentially recruited by angry and fearful faces (Adams, Gordon, Baird, Ambady, & Kleck, 2003; Milesi et al., 2014; Phelps et al., 2001; Repeiski, Smith, Sansom, & Repetski, 1996), which are the predominant stimuli in the emotion-matching paradigm. However, an alternative role for involvement of the amygdala is not as an automatic detector of emotions per se, but rather as a detector of relevant and salient stimuli, of which emotional expressions represent a subclass (Sander et al., 2003).…”
Section: Emotion Matching (mentioning)
confidence: 99%
“…For example, the procedure of fear conditioning instills an initially neutral stimulus with the capacity of inducing reactions and behaviors that are biologically relevant (e.g., freezing or fleeing) upon consistent association with an aversive unconditioned stimulus (Pape & Pare, 2010). Furthermore, the amygdala is likely to process diverse emotional expressions, such as facial expressions (Haxby, Hoffman, & Gobbini, 2002; O'Toole, Roark, & Abdi, 2002; Rossion, 2015; Sabatinelli et al., 2011) and vocal prosody features (Frühholz, Klaas, Patel, & Grandjean, 2015; Frühholz et al., 2015, 2016; Pannese, Grandjean, & Frühholz, 2015; Pannese et al., 2016) as relevant social signals (Sander et al., 2003). The large variety of cortical and subcortical projections to and from the amygdala provide it with information about the properties of the stimulus as well as the ongoing goals and needs of the organism (J. L. Price, 2003).…”
Section: Introduction (mentioning)
confidence: 99%