In the present study, we investigated whether faces have an advantage in retaining attention over other stimulus categories. In three experiments, subjects were asked to focus on a central go/no-go signal before classifying a concurrently presented peripheral line target. In Experiment 1, the go/no-go signal could be superimposed on photographs of upright famous faces, matching inverted faces, or meaningful objects. Experiments 2 and 3 tested upright and inverted unfamiliar faces, printed names, and another class of meaningful objects in an identical design. A fourth experiment provided a replication of Experiment 1, but with a 1,000-msec stimulus onset asynchrony between the onset of the central face/nonface stimuli and the peripheral targets. In all the experiments, the presence of an upright face significantly delayed target response times, in comparison with each of the other stimulus categories. These results suggest a general attentional bias, so that it is particularly difficult to disengage processing resources from faces.
This study presents the Kent Face Matching Test (KFMT), which comprises 200 same-identity and 20 different-identity pairs of unfamiliar faces. Each face pair consists of a photograph from a student ID card and a high-quality portrait that was taken at least three months later. The test is designed to complement existing resources for face-matching research, by providing a more ecologically valid stimulus set that captures the natural variability that can arise in a person's appearance over time. Two experiments are presented to demonstrate that the KFMT provides a challenging measure of face matching but correlates with established tests. Experiment 1 compares a short version of this test with the optimized Glasgow Face Matching Test (GFMT). In Experiment 2, a longer version of the KFMT, with infrequent identity mismatches, is correlated with performance on the Cambridge Face Memory Test (CFMT) and the Cambridge Face Perception Test (CFPT). The KFMT is freely available for use in face-matching research.
In everyday life, human faces are encountered in many different views. Despite this fact, most psychological research has focused on the perception of frontal faces. To address this shortcoming, the current study investigated how different face views are processed, by measuring eye movements to frontal, mid-profile and profile faces during a gender categorization (Experiment 1) and a free-viewing task (Experiment 2). In both experiments observers initially fixated the geometric center of a face, independent of face view. This center-of-gravity effect induced a qualitative shift in the features that were sampled across different face views in the time period immediately after stimulus onset. Subsequent eye fixations focused increasingly on specific facial features. At this stage, the eye regions were targeted predominantly in all face views, and to a lesser extent also the nose and the mouth. These findings show that initial saccades to faces are driven by general stimulus properties, before eye movements are redirected to the specific facial features in which observers take an interest. These findings are illustrated in detail by plotting the distribution of fixations, first fixations, and percentage fixations across time.
In laboratory studies of visual perception, images of natural scenes are routinely presented on a computer screen. Under these conditions, observers look at the center of scenes first, which might reflect an advantageous viewing position for extracting visual information. This study examined an alternative possibility, namely that initial eye movements are drawn towards the center of the screen. Observers searched visual scenes in a person detection task, while the scenes were aligned with the screen center or offset horizontally (Experiment 1). Two central viewing effects were observed, reflecting early visual biases to the scene and the screen center. The scene effect was modified by person content but is not specific to person detection tasks, while the screen bias cannot be explained by the low-level salience of a computer display (Experiment 2). These findings support the notion of a central viewing tendency in scene analysis, but also demonstrate a bias to the screen center that forms a potential artifact in visual perception experiments.
Previous research has demonstrated an interaction between eye gaze and selected facial emotional expressions, whereby the perception of anger and happiness is impaired when the eyes are horizontally averted within a face, but the perception of fear and sadness is enhanced under the same conditions. The current study reexamined these claims over six experiments. In the first three experiments, the categorization of happy and sad expressions (Experiments 1 and 2) and angry and fearful expressions (Experiment 3) was impaired when eye gaze was averted, in comparison to direct gaze conditions. Experiment 4 replicated these findings in a rating task, which combined all four expressions within the same design. Experiments 5 and 6 then showed that previous findings, that the perception of selected expressions is enhanced under averted gaze, are stimulus and task-bound. The results are discussed in relation to research on facial expression processing and visual attention.
Humans attend to faces. This study examines the extent to which attention biases to faces are under top-down control. In a visual cueing paradigm, observers responded faster to a target probe appearing in the location of a face cue than of a competing object cue (Experiments 1a and 2a). This effect could be reversed when faces were negatively predictive of the likely target location, making it beneficial to attend to the object cues (Experiments 1b and 2b). It was easier still to strategically shift attention to predictive face cues (Experiment 2c), indicating that the endogenous allocation of attention was augmented here by an additional effect. However, faces merely delayed the voluntary deployment of attention to object cues, but they could not prevent it, even at short cue-target intervals. This finding suggests that attention biases for faces can be rapidly countered by an observer's endogenous control.
It can be remarkably difficult to determine whether two photographs of unfamiliar faces depict the same person or two different people. This fallibility is well established in the face perception and eyewitness domains, but most of this research has focused on the "average" observer by measuring mean performance across groups of participants. This study deviated from this convention to provide a detailed description of individual differences and observer consistency in unfamiliar face identification by assessing performance repeatedly, across a 3-day (Experiment 1) and a 5-day period (Experiment 2). Both experiments reveal considerable variation between but also within observers. This variation is such that the same observers frequently made different identification decisions to the same faces on different days (Experiment 1). And when new faces were shown on each day, observers who produced perfect accuracy on one day made many misidentifications on another (Experiment 2). However, a few individuals also performed with consistently high accuracy in these tests. These findings suggest that accuracy and consistency are separable indices of face-matching ability, and both measures are necessary to provide a precise index of a person's face processing skill. We discuss whether these measures could provide the basis for a selection tool for occupations that depend on accurate person identification.
Alenezi, Hamood and Bindemann, Markus (2013). In face matching, observers have to decide if two photographs depict the same person or different people. This is a remarkably difficult task, so the current study investigated whether it can be improved when observers receive feedback on their performance. In five experiments, observers' initial matching performance was recorded before feedback for their accuracy was administered across three blocks. Improvements were then assessed with faces that had been seen previously, with or without feedback, and with completely new, previously unseen faces. In all experiments, feedback failed to improve face-matching accuracy. However, trial-by-trial feedback helped to maintain accuracy at baseline level after feedback was withdrawn, even with new faces (Experiments 1 to 3). By contrast, when no feedback was given throughout the experiment (Experiments 1 to 3) or when outcome feedback was administered at the end of blocks (Experiments 4 and 5), a continuous decline in matching accuracy was found, whereby observers found it increasingly difficult to tell different facial identities apart. A sixth experiment showed that this decline in accuracy continues when the matching task is prolonged substantially.
Together, these findings indicate that observers find it increasingly difficult to differentiate faces in matching tasks over time, but trial-by-trial feedback can help to maintain accuracy. The theoretical and practical implications of these findings are discussed.