2016
DOI: 10.3758/s13414-015-1045-8
Matching novel face and voice identity using static and dynamic facial images

Abstract: Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face–voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced choice matching task. They either heard a voice and then saw two faces or saw a…

Cited by 47 publications (78 citation statements)
References 42 publications
“…1 This ability has also been observed in other primates [34]. These findings raised the question of the extent to which people can correctly match an unfamiliar voice and face belonging to the same person [16,20,23,36]. Early work [16,20] argued that people could match voices to dynamically articulating faces but not to static photographs.…”
Section: Related Work
confidence: 91%
“…The relative ease of associating voices with faces may be due to a lifetime of exposure to bimodal stimulation from faces and voices, which may subsequently give voices privileged access to faces during learning and thereby enhance person identification (e.g., Barenholtz et al., 2014; von Kriegstein et al., 2008). Recent studies suggest that even unfamiliar voices and faces can be relatively easily matched (Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Mavica & Barenholtz, 2012; Schweinberger, Robertson, & Kaufmann, 2007; Smith et al., 2016a, 2016b), suggesting that these cues may be integrated without semantic knowledge of the person. Note that the difference between the results obtained in Experiment 1 and Experiment 2 cannot be explained by identity-based, crossmodal redundancies shared solely between faces and voices and not between faces and sounds, as in our design we randomly paired faces with voices.…”
Section: Discussion
confidence: 99%
“…Specifically, we found that unfamiliar faces that had previously been paired with distinctive voices during a learning session were subsequently better remembered than faces learned with a typical voice, although no such benefit was found with non-vocal sounds. Furthermore, other studies showing greater-than-chance performance in matching unfamiliar voices to unfamiliar faces (e.g., Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Mavica & Barenholtz, 2012; Smith, Dunn, Baguley, & Stacey, 2016a, 2016b) suggest that redundant, multisensory cues can enhance person perception in the absence of semantic knowledge.…”
confidence: 99%
“…When we listen to a person speaking without seeing his or her face, on the phone or on the radio, we often build a mental model of the way the person looks [25,45]. There is a strong connection between speech and appearance, part of which is a direct result of the mechanics of speech production: age, gender (which affects the pitch of our voice), the shape of the mouth, facial bone structure, and thin or full lips can all affect the sound we generate.…”
Section: Introduction
confidence: 99%