Debate continues as to the automaticity of the amygdala's response to threat. Accounts taking a strong automaticity line suggest that the amygdala's response to threat is both involuntary and independent of attentional resources. Building on these accounts, prominent models have suggested that anxiety modulates the output of an amygdala-based preattentive threat evaluation system. Here, we argue for a modification of these models. Functional magnetic resonance imaging data were collected while volunteers performed a letter search task of high or low perceptual load superimposed on fearful or neutral face distractors. Neither high- nor low-anxious volunteers showed an increased amygdala response to threat distractors under high perceptual load, contrary to a strong automaticity account of amygdala function. Under low perceptual load, elevated state anxiety was associated with a heightened response to threat distractors in the amygdala and superior temporal sulcus, whereas individuals high in trait anxiety showed a reduced prefrontal response to these stimuli, consistent with weakened recruitment of control mechanisms used to prevent the further processing of salient distractors. These findings suggest that anxiety modulates processing subsequent to competition for perceptual processing resources, with state and trait anxiety having distinguishable influences upon the neural mechanisms underlying threat evaluation and "top-down" control.
Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of ‘fraudulent’ photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately – though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.
We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal Components Analysis (PCA), we show that computational systems based on these averages consistently outperform systems based on collections of instances. Furthermore, the quality of the average improves as more images are used to derive it. These simulations are carried out with famous faces, over which we had no control of superficial image characteristics. We then present data from three experiments demonstrating that image averaging can also improve recognition by human observers. Finally, we describe how PCA on image averages appears to preserve identity-specific face information, while eliminating non-diagnostic pictorial information. We therefore suggest that this is a good candidate for a robust face representation.
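The core intuition behind image averaging can be illustrated with a minimal simulation. The sketch below is entirely hypothetical (it is not the authors' actual pipeline): each "photo" is a flattened pixel vector consisting of a stable identity signal plus superficial image noise, and averaging several photos washes out the non-diagnostic variation, leaving a representation closer to any novel instance of that face than the individual instances are to each other.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_instances(identity_core, n, noise=0.5):
    """Simulate n photos of one person: a stable identity signal
    plus superficial image variation (lighting, pose, etc.)."""
    return identity_core + noise * rng.standard_normal((n, identity_core.size))

core_a = rng.standard_normal(100)      # identity A's stable structure
photos_a = make_instances(core_a, 20)  # 20 varying "photos" of A

# The average converges on the identity-specific signal,
# suppressing the pictorial noise of individual instances.
avg_a = photos_a.mean(axis=0)

# A novel photo of A sits closer to A's average than to
# typical single instances of A.
probe = make_instances(core_a, 1)[0]
dist_to_avg = np.linalg.norm(probe - avg_a)
mean_dist_to_instances = np.mean([np.linalg.norm(probe - p) for p in photos_a])
assert dist_to_avg < mean_dist_to_instances
```

Because the noise terms are independent across photos, the average's residual noise shrinks roughly with the square root of the number of images, which mirrors the abstract's observation that the quality of the average improves as more images are used.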
Research in face recognition has tended to focus on discriminating between individuals, or 'telling people apart'. It has recently become clear that it is also necessary to understand how images of the same person can vary, or 'telling people together'. Learning a new face, and tracking its representation as it changes from unfamiliar to familiar, involves an abstraction of the variability in different images of that person's face. Here we present an application of Principal Components Analysis computed across different photos of the same person. We demonstrate that people vary in systematic ways, and that this variability is idiosyncratic: the dimensions of variability in one face do not generalise well to another. Learning a new face therefore entails learning how that face varies. We present evidence for this proposal, and suggest that it provides an explanation for various effects in face recognition. We conclude by making a number of testable predictions derived from this framework.
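The idiosyncrasy claim can be sketched computationally. In the illustrative (and purely hypothetical) simulation below, each identity varies along its own small set of directions in image space; PCA fitted to one person's photos then captures that person's variability almost perfectly, but transfers poorly to another person's photos:

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 100, 5  # pixel dimension; per-identity variability dimensions

def person(seed):
    """A face: a mean image plus an idiosyncratic variability basis."""
    r = np.random.default_rng(seed)
    mean = r.standard_normal(D)
    basis, _ = np.linalg.qr(r.standard_normal((D, K)))  # orthonormal directions
    return mean, basis

def photos(mean, basis, n):
    """n photos of one person, varying only along that person's basis."""
    return mean + rng.standard_normal((n, K)) @ basis.T

mean_a, basis_a = person(10)
mean_b, basis_b = person(20)
photos_a = photos(mean_a, basis_a, 50)
photos_b = photos(mean_b, basis_b, 50)

def top_components(X, k):
    """Principal directions of variability via SVD of centred data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k]

def explained(X, pcs):
    """Fraction of X's variance captured by the given components."""
    Xc = X - X.mean(axis=0)
    proj = Xc @ pcs.T @ pcs
    return (proj ** 2).sum() / (Xc ** 2).sum()

pcs_a = top_components(photos_a, K)
print(explained(photos_a, pcs_a))  # near 1.0: A's PCs fit A's variability
print(explained(photos_b, pcs_a))  # low: they do not generalise to B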
Electrophysiological recording in the anterior superior temporal sulcus (STS) of monkeys has demonstrated separate cell populations responsive to direct and averted gaze [1, 2]. Human functional imaging has demonstrated posterior STS activation in gaze processing, particularly in coding the intentions conveyed by gaze [3–6], but to date has provided no evidence of dissociable coding of different gaze directions. Because the spatial resolution typical of group-based fMRI studies (∼6–10 mm) exceeds the size of cellular patches sensitive to different facial characteristics (1–4 mm in monkeys), a more sensitive technique may be required. We therefore used fMRI adaptation, which is considered to offer superior resolution [7], to investigate whether the human anterior STS contains representations of different gaze directions, as suggested by non-human primate research. Subjects viewed probe faces gazing left, directly ahead, or right. Adapting to leftward gaze produced a reduction in BOLD response to left relative to right (and direct) gaze probes in the anterior STS and inferior parietal cortex; rightward gaze adaptation produced a corresponding reduction to right gaze probes. Consistent with these findings, averted gaze in the adapted direction was misidentified as direct. Our study provides the first human evidence of dissociable neural systems for left and right gaze.
We are usually able to recognise novel instances of familiar faces with little difficulty, yet recognition of unfamiliar faces can be dramatically impaired by natural within-person variability in appearance. In a card-sorting task for facial identity, different photos of the same unfamiliar face are often seen as different people (Jenkins, White, van Montfort & Burton, 2011). Here we report two card sorting experiments in which we manipulate whether participants know the number of identities present. Without constraints, participants sort faces into many identities. However, when told the number of identities present, they are highly accurate. This minimal contextual information appears to support viewers in 'telling faces together'. In Experiment 2 we show that exposure to within-person variability in the sorting task improves performance in a subsequent face-matching task. This appears to offer a fast route to learning generalisable representations of new faces.
Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability—a group that has come to be known as ‘super-recognisers’. The Metropolitan Police Force (London) recruits ‘super-recognisers’ from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police ‘super-recognisers’ perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition.