2014
DOI: 10.1002/hbm.22631
How the human brain exchanges information across sensory modalities to recognize other people

Abstract: Recognizing the identity of other individuals across different sensory modalities is critical for successful social interaction. In the human brain, face- and voice-sensitive areas are separate, but structurally connected. What kind of information is exchanged between these specialized areas during cross-modal recognition of other individuals is currently unclear. For faces, specific areas are sensitive to identity and to physical properties. It is an open question whether voices activate representations of fa…

Cited by 42 publications (57 citation statements)
References 90 publications (169 reference statements)
“…Our behavioral results replicate findings from two earlier studies using similar designs [Ellis et al., 1997; Föcker et al., 2011]. Behavioral priming effects of voice primes on face targets [Blank et al., 2015; Ellis et al., 1997] and face identity aftereffects caused by voice adaptors [Hills et al., 2010] have previously been reported, indicating bidirectional multisensory interactions. Crossmodal behavioral priming effects were further not restricted to the domain of person identity processing, as they have been shown during the processing of audiovisual affective person information as well [Skuk and Schweinberger, 2013; Watson et al., 2014b].…”
Section: Figure (supporting)
confidence: 85%
“…Another alternative are priming paradigms: it is a well-established finding that the repeated presentation of the same stimulus (or of the same stimulus attribute) causes the fMRI signal to decline in brain regions that process that stimulus or that stimulus attribute [Grill-Spector et al., 2006; Henson, 2003; Schacter and Buckner, 1998]. Priming effects for crossmodal prime-target combinations have previously been explored with fMRI [Adam and Noppeney, 2010; Blank et al., 2015; Noppeney et al., 2008; Tal and Amedi, 2009; Watson et al., 2014b], but the effects of face primes on voice recognition have not yet been investigated.…”
Section: Introduction (mentioning)
confidence: 99%
“…Research in nonhuman primates has shown that cells in anterior-ventral temporal cortex are highly sensitive to particular facial identities as well as to facial familiarity (56, 57). Previous studies in humans using intracranial recording (16) or fMRI analyses (15, 42, 45) have suggested that the ATL can distinguish between different people using their faces, voices, or names. Our study extends these findings by using sophisticated multivariate analyses and a wider range of stimulus categories.…”
Section: Discussion (mentioning)
confidence: 99%
“…Next, we asked whether the ATL acts as a neural switchboard, performing in concert with other brain regions to enable the retrieval of different facets of person knowledge in a flexible and context-appropriate manner (study 2). We focus on the ATL because multiple lines of evidence from neuropsychology, electrophysiology, and neuroimaging have documented the critical role of the ATL in person identification (4, 5, 11–16), person-related learning (10, 17–21), semantic memory (6–8), and abstract social knowledge (1, 22–33). Individuals with ATL damage due to resection or stroke have multimodal person recognition deficits (34), lose access to stored knowledge about familiar people (35, 36), and have difficulties learning information about new people (4, 22, 37, 38).…”
(mentioning)
confidence: 99%
“…Importantly, responses in the FFA appear to support the perceptual processing of voices (Blank, Kiebel, & von Kriegstein, 2015; Schall et al., 2013), providing evidence that the face and voice interact to support identity processing at earlier stages of processing than previously assumed (Bruce & Young, 1986; Burton, Bruce, & Johnston, 1990; Ellis, Jones, & Mosdell, 1997). More traditional models of person recognition propose that the face and voice undergo extensive unisensory processing and only interact to support recognition at supramodal, i.e., post-perceptual, stages of processing (Burton et al., 1990; Bruce & Young, 1986; see Blank, Wieland, and von Kriegstein, 2014; Barton and Corrow, 2016; Quaranta et al., 2016, for more recent reviews of these models).…”
Section: Voice-Identity Processing: Audio-Visual Interactions in the… (mentioning)
confidence: 87%