We present an expanded version of a widely used measure of unfamiliar face matching ability, the Glasgow Face Matching Test (GFMT). The GFMT2 is created from the same source database as the original test but makes five key improvements. First, the test items include variation in head angle, pose, expression and subject-to-camera distance, making the new test more difficult and more representative of the challenges in everyday face identification tasks. Second, the short and long versions of the test each contain two forms calibrated to be of equal difficulty, allowing repeat testing, for example to examine the effects of training interventions. Third, the short-form tests contain no repeating face identities, removing any confounding effects of familiarity that may have been present in the original test. Fourth, separate short versions target exceptionally high-performing or exceptionally low-performing individuals, using established psychometric principles. Fifth, all tests are implemented in an executable program, allowing them to be administered automatically. All tests are available free for scientific use via www.gfmt2.org.
Accurately recognising faces enables social interactions. In recent years it has become clear that people's accuracy differs markedly depending on the viewer's familiarity with a face and their individual skill, but the cognitive and neural bases of these accuracy differences are not understood. We examined the cognitive representations underlying these accuracy differences by measuring similarity ratings to natural facial image variation. Natural variation was sampled from uncontrolled images on the internet to reflect the appearance of faces as they are encountered in daily life. Using image averaging, and inspired by the computation of Analysis of Variance, we partitioned this variation into differences between faces (between-identity variation) and differences between photos of the same face (within-identity variation). This allowed us to compare modulation of these two sources of variation attributable to: (i) a person's familiarity with a face and (ii) their face recognition ability. Contrary to prevailing accounts of human face recognition and perceptual learning, we found that modulation of within-identity variation, rather than between-identity variation, was associated with high accuracy. First, familiarity modulated similarity ratings to within-identity variation more than to between-identity variation. Second, viewers who are extremely accurate in face recognition ('super-recognisers') differed from typical perceivers mostly in their ratings of within-identity variation, compared to between-identity variation. In a final computational analysis, we found evidence that transformations of between- and within-identity variation make separable contributions to perceptual expertise in face recognition. We conclude that inter- and intra-individual accuracy differences primarily arise from differences in the representation of within-identity image variation.
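The ANOVA-inspired partition described in this abstract can be illustrated with a short sketch. This is a minimal illustration, not the authors' code: the function name `partition_variation`, and the assumption of aligned, same-sized grayscale photos grouped by identity, are ours.

```python
import numpy as np

def partition_variation(images_by_identity):
    """Partition pixelwise image variation into between- and
    within-identity components, in the spirit of a one-way ANOVA.

    images_by_identity: list of float arrays, one per face identity,
    each of shape (n_photos, height, width). Photos are assumed to
    be aligned and equally sized.
    """
    grand_mean = np.concatenate(images_by_identity).mean(axis=0)
    between = np.zeros_like(grand_mean)
    within = np.zeros_like(grand_mean)
    for imgs in images_by_identity:
        identity_mean = imgs.mean(axis=0)  # this person's image average
        # Between-identity: deviation of the person's average image
        # from the grand average, weighted by their photo count.
        between += imgs.shape[0] * (identity_mean - grand_mean) ** 2
        # Within-identity: deviation of individual photos from that
        # person's average image.
        within += ((imgs - identity_mean) ** 2).sum(axis=0)
    return between, within
```

Image averaging enters through the per-identity means: each person's "average image" stands in for the ANOVA group mean, and the two returned maps separate how faces differ from one another from how photos of the same face differ.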
Perceptual processes underlying individual differences in face-recognition ability remain poorly understood. We compared visual sampling of 37 adult super-recognizers—individuals with superior face-recognition ability—with that of 68 typical adult viewers by measuring gaze position as they learned and recognized unfamiliar faces. In both phases, participants viewed faces through “spotlight” apertures that varied in size, with face information restricted in real time around their point of fixation. We found higher accuracy in super-recognizers at all aperture sizes—showing that their superiority does not rely on global sampling of face information but is also evident when they are forced to adopt piecemeal sampling. Additionally, super-recognizers made more fixations, focused less on the eye region, and distributed their gaze more widely than typical viewers. These differences were most apparent when learning faces and were consistent with trends we observed across the broader ability spectrum, suggesting that they reflect factors that vary dimensionally in the broader population.
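The gaze-contingent “spotlight” manipulation amounts to masking the stimulus around the current fixation point on every display refresh. A minimal sketch, assuming a 2-D grayscale image and pixel coordinates; the function name and the neutral-grey fill are our assumptions, and a real experiment would run this in a real-time loop driven by eye-tracker samples.

```python
import numpy as np

def spotlight_mask(image, fixation_xy, radius):
    """Reveal a circular aperture of the face image around the
    current fixation point; blank everything outside it.

    image: 2-D grayscale array; fixation_xy: (x, y) in pixels;
    radius: aperture radius in pixels.
    """
    h, w = image.shape
    ys, xs = np.ogrid[:h, :w]
    fx, fy = fixation_xy
    inside = (xs - fx) ** 2 + (ys - fy) ** 2 <= radius ** 2
    masked = np.full(image.shape, image.mean())  # neutral grey fill
    masked[inside] = image[inside]
    return masked
```

Varying `radius` across trials reproduces the aperture-size manipulation: small apertures force piecemeal sampling, large ones approach unrestricted viewing.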
Faces are key to everyday social interactions, but our understanding of social attention is based on experiments that present images of faces on computer screens. Advances in wearable eye-tracking devices now enable studies in unconstrained natural settings, but this approach has been limited by the manual coding of fixations. Here we introduce an automatic ‘dynamic region of interest’ approach that registers eye fixations to the bodies and faces seen while a participant moves through the environment. We show that just 14% of fixations are to the faces of passersby, contrasting with prior screen-based studies suggesting that faces automatically capture visual attention. We also demonstrate the potential for this new tool to help understand differences in individuals’ social attention and the content of their perceptual exposure to other people. Together, this can form the basis of a new paradigm for studying social attention ‘in the wild’ that opens new avenues for theoretical, applied and clinical research.
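At its core, a dynamic region-of-interest analysis of this kind reduces to a point-in-box test applied frame by frame. The sketch below is a simplified illustration, not the authors' pipeline: the detection tuple format and the `fixation_hits` helper are assumptions, and a real system would obtain the boxes from a face/body detector run on each scene-camera frame.

```python
def fixation_hits(fixation_xy, detections):
    """Label a fixation by the detected region it falls in.

    fixation_xy: (x, y) gaze position in scene-camera coordinates.
    detections: list of (label, x0, y0, x1, y1) bounding boxes for
    the same video frame. Returns the label of the first box that
    contains the fixation, or 'background' if none does.
    """
    x, y = fixation_xy
    for label, x0, y0, x1, y1 in detections:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return "background"

# Toy example: one frame containing a face box inside a body box.
detections = [("face", 310, 120, 380, 200), ("body", 280, 120, 420, 560)]
print(fixation_hits((340, 150), detections))  # -> face
print(fixation_hits((50, 50), detections))    # -> background
```

Tallying these labels over all frames yields statistics such as the proportion of fixations landing on faces, replacing manual frame-by-frame coding.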
Human faces convey a range of information, such as gender, identity, and emotional state. Understanding the differences between volunteers’ eye movements on benchmark tests of face recognition and perception can therefore reveal the most discriminating facial regions for performance in this visual cognitive task. The aim of this work is to characterise and classify these gaze strategies using multivariate statistics and machine learning techniques, achieving up to 94.8% accuracy. Our experimental results show that volunteers focused their visual attention, on average, on the eyes, but those with superior test performance looked more closely at the nose region.
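The classification step described here follows a standard multivariate pattern-recognition recipe. Nothing in the sketch below is taken from the paper: the feature set (fixation proportions on the eyes, nose, and mouth), the SVM classifier, and the random placeholder data are all assumptions used to make it runnable.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per volunteer, with illustrative gaze features such as
# the proportion of fixation time on the eyes, nose, and mouth.
# y: 1 for superior test performance, 0 for typical performance.
rng = np.random.default_rng(0)
X = rng.random((40, 3))          # placeholder feature matrix
y = rng.integers(0, 2, size=40)  # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```

With real gaze features in place of the placeholders, the cross-validated score is the kind of figure the abstract's 94.8% accuracy would correspond to.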