Research in face recognition has tended to focus on discriminating between individuals, or 'telling people apart'. It has recently become clear that it is also necessary to understand how images of the same person can vary, or 'telling people together'. Learning a new face, and tracking its representation as it changes from unfamiliar to familiar, involves an abstraction of the variability in different images of that person's face. Here we present an application of Principal Components Analysis computed across different photos of the same person. We demonstrate that people vary in systematic ways, and that this variability is idiosyncratic: the dimensions of variability in one face do not generalise well to another. Learning a new face therefore entails learning how that face varies. We present evidence for this proposal, and suggest that it provides an explanation for various effects in face recognition. We conclude by making a number of testable predictions derived from this framework.
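The core computation named above, PCA across different photos of a single person, can be sketched generically. This is a minimal illustration via singular value decomposition on vectorized images, not the authors' exact pipeline; the array shapes and the synthetic "photos" are assumptions for demonstration.

```python
import numpy as np

def within_person_pca(images, n_components=3):
    """Principal components of variability across photos of ONE person.

    `images` is an (n_images, n_pixels) array of vectorized photos.
    Returns the mean face, the top components (dimensions of variability),
    and the proportion of variance each component explains.
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data; rows of vt are the principal directions
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()
    return mean_face, vt[:n_components], explained[:n_components]

# Toy example: 20 synthetic "photos" of one identity, 100 pixels each
rng = np.random.default_rng(0)
photos = rng.normal(size=(20, 100))
mean_face, components, explained = within_person_pca(photos)
print(components.shape)  # (3, 100)
```

On this view, the claim that variability is idiosyncratic amounts to saying that components fitted to one identity's photos reconstruct a different identity's photos poorly.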
Face masks present a new challenge to face identification (here, matching) and emotion recognition in Western cultures. Here, we present the results of three experiments that test the effect of masks, and also the effect of sunglasses (an occlusion with which individuals tend to have more experience), on (i) familiar face matching, (ii) unfamiliar face matching and (iii) emotion categorization. Occlusion reduced accuracy in all three tasks, with most errors in the mask condition; however, there was little difference in performance for faces in masks compared with faces in sunglasses. Super-recognizers, people who are highly skilled at matching unconcealed faces, were impaired by occlusion, but at the group level performed with higher accuracy than controls on all tasks. Results inform psychology theory with implications for everyday interactions, security and policing in a mask-wearing society.
Research on face learning has tended to use sets of images that vary systematically on dimensions such as pose and illumination. In contrast, we have proposed that exposure to naturally varying images of a person may be a critical part of the familiarization process. Here, we present two experiments investigating face learning with “ambient images”—relatively unconstrained photos taken from internet searches. Participants learned name and face associations for unfamiliar identities presented in high or low within-person variability—that is, images of the same person returned by internet search on their name (high variability) versus different images of the same person taken from the same event (low variability). In Experiment 1 we show more accurate performance on a speeded name verification task for identities learned in high than in low variability, when the test images are completely novel photos. In Experiment 2 we show more accurate performance on a face matching task for identities previously learned in high than in low variability. The results show that exposure to a large range of within-person variability leads to enhanced learning of new identities.
Over the last ten years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgments of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries, and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology that allows for correlated dimensions, we observed much less generalization. Collectively, these results suggest that the valence-dominance model generalizes very well across regions when dimensions are forced to be orthogonal, but that regional differences are revealed when the extraction method allows the dimensions to correlate and the solution is rotated.
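The methodological point here, that forcing orthogonality can mask correlated dimensions, can be illustrated with a generic demonstration (not the replication's actual analysis). Below, two latent traits are genuinely correlated, yet the scores recovered by PCA are uncorrelated by construction; the simulated rating matrix and its dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two latent traits that are genuinely correlated (r = 0.6)
latent = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
# Project the latent traits onto 10 observed "rating" dimensions, plus noise
ratings = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(500, 10))

# PCA scores: project centered ratings onto the top two principal axes
centered = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T

# The recovered dimensions are orthogonal regardless of the latent correlation
r = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]
print(abs(r) < 1e-8)  # True
```

An oblique rotation (e.g. promax) of the same solution would be free to recover the latent correlation, which is the contrast the alternative methodology exploits.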
Research on ensemble encoding has found that viewers extract summary information from sets of similar items. When shown a set of four faces of different people, viewers merge identity information from the exemplars into a representation of the set average. Here, we presented sets containing unconstrained images of the same identity. In response to a subsequent probe, viewers recognized the exemplars accurately. However, they also reported having seen a merged average of these images. Importantly, viewers reported seeing the matching average of the set (the average of the four presented images) more often than a nonmatching average (an average of four other images of the same identity). These results were consistent for both simultaneous and sequential presentation of the sets. Our findings support previous research suggesting that viewers form representations of both the exemplars and the set average. Given the unconstrained nature of the photographs, we also provide further evidence that the average representation is invariant to several high-level characteristics.
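The "set average" representation described above can be sketched as a pixel-wise mean over exemplars. This is a minimal sketch: real face averaging typically aligns facial landmarks before averaging, which is omitted here, and the toy image sizes are assumptions.

```python
import numpy as np

def set_average(exemplars):
    """Merge a set of exemplar images into their pixel-wise average.

    `exemplars` is an (n_images, height, width) array of same-identity
    photos; landmark alignment (used in practice) is omitted.
    """
    exemplars = np.asarray(exemplars, dtype=float)
    return exemplars.mean(axis=0)

# Four toy 8x8 "images" of the same identity
rng = np.random.default_rng(1)
faces = rng.random((4, 8, 8))
avg = set_average(faces)
print(avg.shape)  # (8, 8)
```

The matching/nonmatching manipulation in the abstract then corresponds to probing with the average of the four presented exemplars versus the average of four other photos of the same identity.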