2006
DOI: 10.3758/bf03193658

Hearing a face: Cross-modal speaker matching using isolated visible speech

Abstract: An experiment was performed to test whether cross-modal speaker matches could be made using isolated visible speech movement information. Visible speech movements were isolated using a point-light technique. In five conditions, subjects were asked to match a voice to one of two (unimodal) speaking point-light faces on the basis of speaker identity. Two of these conditions were designed to maintain the idiosyncratic speech dynamics of the speakers, whereas three of the conditions deleted or distorted the dynamic…

Cited by 24 publications (38 citation statements)
References 44 publications
“…In fact, this information allows perceivers to match heard speech to lipread speech on the basis of talker identity (e.g., Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Lachs & Pisoni, 2004a, 2004b, 2004c; Rosenblum et al., 2006). This suggests that speaking style can be perceived across modalities.…”
Section: Methods (mentioning)
confidence: 98%
“…In supporting cross-modal matches, this information might best be construed as amodal or modality neutral. In fact, the notion of amodal talker-specific information has been used to explain the cross-modal talker matching findings described earlier (Kamachi et al., 2003; Lachs & Pisoni, 2004a, 2004b, 2004c; Rosenblum et al., 2006). The authors of those reports suggest that cross-modal matching could be based on the extraction of common idiolectic information available across modalities.…”
Section: Informational Basis of Alignment (mentioning)
confidence: 97%
“…Recent studies have investigated these phenomena from the viewpoint of human perception and psychophysics [10][11][12][13][14]. In these studies, human observers were asked to match an audio recording of an unknown voice X to two video (visual-only) recordings of two unknown speakers, A and B, one of which is X, and vice versa, under a variety of experimental conditions.…”
mentioning
confidence: 99%
“…In these studies, human observers were asked to match an audio recording of an unknown voice X to two video (visual-only) recordings of two unknown speakers, A and B, one of which is X, and vice versa, under a variety of experimental conditions. Lachs et al. [11], Rosenblum et al. [14] and Kamachi et al. [10] reported human observers correctly matching X to A or B around 65% of the time, compared to the chance value of 50%. This was shown to be statistically significant given the number of independent test cases considered.…”
mentioning
confidence: 99%
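
To make the chance-level comparison in the statement above concrete, here is a minimal sketch of the kind of significance check involved: an exact one-sided binomial test of roughly 65% accuracy against the 50% guessing baseline of a two-alternative matching task. The trial count used here (80) is an illustrative assumption, not a figure taken from the cited studies, which report accuracy levels but whose exact numbers of test cases vary.

```python
# Minimal sketch: exact one-sided binomial test for a two-alternative
# forced-choice matching task, where chance performance is 50%.
# The trial count below is an assumption for illustration only.
from math import comb

def binomial_tail_p(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the probability of scoring at
    least k correct out of n trials by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 80                        # assumed number of independent matching trials
n_correct = round(0.65 * n_trials)   # ~65% correct, as reported in the cited studies

p_value = binomial_tail_p(n_correct, n_trials)
print(f"{n_correct}/{n_trials} correct, one-sided p = {p_value:.4f}")
# With these assumed numbers p is roughly 0.005, i.e. 65% accuracy is very
# unlikely to arise from 50% chance guessing.
```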