2002
DOI: 10.3758/bf03195788
Visual speech information for face recognition

Abstract: Two experiments test whether isolated visible speech movements can be used for face matching. Visible speech information was isolated with a point-light methodology. Participants were asked to match articulating point-light faces to a fully illuminated articulating face in an XAB task. The first experiment tested single-frame static face stimuli as a control. The results revealed that the participants were significantly better at matching the dynamic face stimuli than the static ones. Experiment 2 tested wheth…

Cited by 46 publications (75 citation statements)
References 54 publications (87 reference statements)
“…Analogous observations and arguments have been discussed in the visual speech literature (Rosenblum et al., 2002). In research modeled on the sine wave speech work, we found evidence that isolated visible articulatory information can be informative about speakers.…”
(mentioning; confidence: 55%)
“…Research has shown that visual speech information can be recovered from these images (Rosenblum, Johnson, & Saldaña, 1996) and that they are treated as "true" visual speech stimuli in being automatically integrated with auditory speech (McGurk & MacDonald, 1976; …). Our more recent research has shown that the isolated visible speech information contained in point-light stimuli can also inform about speaker identity (Rosenblum, Smith, & Niehus, 2006; Rosenblum et al., 2002). Despite the fact that these stimuli do not contain the facial feature and configuration information assumed necessary for recognition, point-light speech can be used for face recognition in both matching (Rosenblum et al., 2002) and identification contexts (Rosenblum et al., 2006; see also Bruce & Valentine, 1988).…”
(mentioning; confidence: 99%)
“…Previous studies have shown that point-light displays of a talker's articulating face convey enough of the phonetic segmental grain to distinguish many phonemes and words, and that this sensory stream is readily integrated with auditory speech (see Rosenblum & Saldaña, 1998, for a review). Moreover, these displays also permit the recognition of a familiar talker's face in silence (Rosenblum, Yakel, Baseer, & Panchal, 2002). Rosenblum et al. have argued that subjects were able to recognize familiar faces by using idiolectal properties conveyed visually.…”
Section: Interactions Between Indexical and Linguistic Processing Acr…
(mentioning; confidence: 99%)