2017
DOI: 10.1002/hbm.23540
Activation in the angular gyrus and in the pSTS is modulated by face primes during voice recognition

Abstract: The aim of the present study was to better understand the interaction of face and voice processing when identifying people. In a S1-S2 crossmodal priming fMRI experiment, the target (S2) was a disyllabic voice stimulus, whereas the modality of the prime (S1) was manipulated blockwise and consisted of the silent video of a speaking face in the crossmodal condition or of a voice stimulus in the unimodal condition. Primes and targets were from the same speaker (person-congruent) or from two different speakers (pe…

Cited by 13 publications (10 citation statements)
References 83 publications (173 reference statements)
“…Such tasks are not usually used in neuroimaging studies on voice-identity recognition, which likely explains the paucity of parietal lobe responses in neuroimaging studies on voice processing (Belin and Zatorre, 2003; von Kriegstein et al., 2003; Andics et al., 2010). However, the results of two neuroimaging studies, which investigated crossmodal voice-face priming and voice-face learning, respectively, are congruent with the suggestion that the right inferior parietal lobe is involved in a representation of person-related voice and face information (von Kriegstein and Giraud, 2006; Hölig et al., 2017). They showed that the inferior parietal lobe is involved in voice-identity recognition for voices learned with faces, but not with names (von Kriegstein and Giraud, 2006).…”
Section: Discussion
confidence: 53%
“…While previous studies have also shown evidence that the integration of emotion face and voice stimuli occurs in the pSTS (Ethofer et al 2006;Romanski 2007;Kreifelts et al 2009;Watson, Latinus, Noguchi, et al 2014;Hölig et al 2017), other studies have suggested that MSI may occur via direct reciprocal connections between unimodal face and voice regions (von Kriegstein and Giraud 2006). One possible explanation for this discrepancy is due to the fact that the former studies did not independently localize face-and voice-selective regions, and test MSI within these regions.…”
Section: Discussion
confidence: 92%
“…Influential models of identity recognition suggest 2 largely distinct brain networks for face and voice processing, with a shared multisensory convergence zone for the integration of information across sensory modalities (Bruce and Young 1986;Burton et al 1990;Haxby et al 2000;Watson, Latinus, Charest, et al 2014). Some studies have suggested that the integration of facial and vocal emotional signals occur in multisensory regions like the amygdala, orbitofrontal cortex, and posterior superior temporal sulcus (pSTS) (Romanski 2007;Kreifelts et al 2009;Peelen et al 2010;Watson, Latinus, Charest, et al 2014;Hölig et al 2017); regions separate from the core face-and voice-selective network (Campanella and Belin 2007). However, the absence of independent localizers in these studies makes it difficult to ascertain whether regions showing multisensory integration are located within face-or voice-selective regions or instead occurs within an independent convergence zone.…”
Section: Introduction
confidence: 99%