2004
DOI: 10.1207/s15516709cog2802_8

Spatio‐temporal dynamics of face recognition in a flash: it's in the eyes

Abstract: We adapted the Bubbles procedure [Vis. Res. 41 (2001) 2261] to examine the effective use of information during the first 282 ms of face identification. Ten participants each viewed a total of 5100 faces sub-sampled in space-time. We obtained a clear pattern of effective use of information: the eye on the left side of the image became diagnostic between 47 and 94 ms after the onset of the stimulus; after 94 ms, both eyes were used effectively. This preference for the eyes increased with practice, and was not s…
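The abstract describes sub-sampling face stimuli in space-time with the Bubbles technique: Gaussian apertures ("bubbles") at random space-time locations reveal parts of a face clip, and everything outside them is replaced by mid-gray. Below is a minimal, illustrative Python/NumPy sketch of that kind of sampling. All names and parameter values (bubble_mask, sample_stimulus, n_bubbles, sigma_xy, sigma_t, the 0.5 mid-gray level, the clip size) are assumptions for illustration, not the paper's actual settings or code.

# Minimal sketch of spatio-temporal Bubbles sampling (hypothetical helper
# names and parameter values; not the paper's exact implementation).
import numpy as np

def bubble_mask(shape, n_bubbles=20, sigma_xy=10.0, sigma_t=1.5, rng=None):
    # Sum of 3-D Gaussian apertures over (time, y, x), clipped to [0, 1].
    rng = np.random.default_rng(rng)
    T, H, W = shape
    t = np.arange(T)[:, None, None]
    y = np.arange(H)[None, :, None]
    x = np.arange(W)[None, None, :]
    mask = np.zeros(shape, dtype=float)
    for _ in range(n_bubbles):
        ct, cy, cx = rng.uniform(low=[0, 0, 0], high=[T, H, W])
        mask += np.exp(-0.5 * (((t - ct) / sigma_t) ** 2
                               + ((y - cy) / sigma_xy) ** 2
                               + ((x - cx) / sigma_xy) ** 2))
    return np.clip(mask, 0.0, 1.0)

def sample_stimulus(face_clip, **bubble_kwargs):
    # Reveal the face clip only through the bubbles; show mid-gray elsewhere.
    # face_clip: float array of shape (n_frames, height, width), values in [0, 1].
    mask = bubble_mask(face_clip.shape, **bubble_kwargs)
    return mask * face_clip + (1.0 - mask) * 0.5, mask

if __name__ == "__main__":
    clip = np.random.rand(6, 128, 128)   # stand-in for a short face clip
    stimulus, mask = sample_stimulus(clip, n_bubbles=15)
    print(stimulus.shape, mask.min(), mask.max())

In a classification-image analysis of this kind, one common approach is to sum the masks from correct trials and subtract those from incorrect trials, which highlights the space-time regions (here, the eye regions early in the stimulus) that drive accurate identification.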

Cited by 105 publications (131 citation statements)
References 38 publications (22 reference statements)

“…This eye anchoring mechanism is supported by studies showing that the simple detection of a face is impaired by removing the eyes more so than by removing the nose or the mouth (Lewis & Edmonds, 2003). It is also in line with image classification studies showing that eyes are the first feature attended (Vinette et al., 2004). Both the eye anchoring mechanism and the holistic processing are possible based on an upright human face template (see also Rossion, 2009, and Johnson, 2005, for developmental evidence).…”
Section: Inhibition Of Foveated Features By Perifoveal Features Ensur… (supporting)
confidence: 63%
“…The eyes seem to be the diagnostic feature used to recognize identity, several facial expressions, and gender (Dupuis-Roy et al., 2009; Schyns et al., 2007). Better expertise in face processing seems to be driven by better information extraction from the eye region (Vinette et al., 2004), a capacity that might go awry in some cases of prosopagnosia in which the eye region is not properly attended (Caldara et al., 2005). Eyes provide essential cues to others' attention and intention through gaze perception, putting them at the core of social cognition and its impairments as seen in Autism Spectrum Disorder (Itier & Batty, 2009, for a review).…”
Section: Introduction (mentioning)
confidence: 99%
“…the left eye) tend to become diagnostic earlier than their counterparts within the right hemiface (Schyns et al., 2002; Vinette et al., 2004). Taken together, it seems that we can allocate attention quicker or are more sensitive to local facial cues contained in the left hemiface.…”
Section: Introduction (mentioning)
confidence: 95%
“…Research on facial recognition and face learning shows that the eyes contain a great deal of diagnostic information for making identity judgments (Schyns, Bonnar & Gosselin, 2002; Vinette, Gosselin & Schyns, 2004). The eyes are also preferentially fixated during facial recognition (Henderson, Williams & Falk, 2005; Barton et al., 2006; Althoff & Cohen, 1999) and face learning tasks (Henderson, Williams & Falk, 2005).…”
Section: NIH-PA Author Manuscript (mentioning)
confidence: 99%
“…However, the exposure to the talker in the Same Talker condition in this study was presumably too brief to produce significant speech-related learning. It is possible that the subtle change in visual information gathering strategy when presented with a different talker on every trial was due to increased effort by subjects to gather visual speech and identity information to integrate with the auditory speech and identity information. Research on facial recognition and face learning shows that the eyes contain a great deal of diagnostic information for making identity judgments (Schyns, Bonnar & Gosselin, 2002; Vinette, Gosselin & Schyns, 2004). The eyes are also preferentially fixated during facial recognition (Henderson, Williams & Falk, 2005; Barton et al., 2006; Althoff & Cohen, 1999) and face learning tasks (Henderson, Williams & Falk, 2005).…”
(mentioning)
confidence: 99%