Four experiments investigated matching of unfamiliar target faces taken from high-quality video against arrays of photographs. In Experiment 1, targets were present in 50% of arrays. Accuracy was poor and worsened when viewpoint and expression differed between target and array faces. In Experiment 2, targets were present in every array, but performance remained highly error prone. In Experiment 3, short video clips of the targets were shown and replayed as often as necessary, but performance levels were only slightly better than in Experiment 2. Experiment 4 showed that matching was dominated by external face features. The results urge caution in the use of video images to identify people who have committed crimes. Superficial impressions of resemblance or dissimilarity between face images can be highly misleading.

The human face provides the most reliable means of person identification available to the human eye (although fingerprints and iris patterns may prove more useful for automated identification; e.g., see Daugman, 1998).
Security surveillance systems often produce poor-quality video, and this may be problematic in gathering forensic evidence. We examined the ability of subjects to identify target people captured by a commercially available video security device. In Experiment 1, subjects personally familiar with the targets performed very well at identifying them, but subjects unfamiliar with the targets performed very poorly. Police officers with experience in forensic identification performed as poorly as other subjects unfamiliar with the targets. In Experiment 2, we asked how familiar subjects can perform so well. Using the same video device, we edited clips to obscure the head, body, or gait of the targets. Obscuring body or gait produced a small decrement in recognition performance. Obscuring the targets' heads had a dramatic effect on subjects' ability to recognize the targets. These results imply that subjects recognized the targets' faces, even in these poor-quality images.
People are remarkably accurate (approaching ceiling) at deciding whether faces are male or female, even when cues from hair style, makeup, and facial hair are minimised. Experiments designed to explore the perceptual basis of our ability to categorise the sex of faces are reported. Subjects were considerably less accurate when asked to judge the sex of three-dimensional (3-D) representations of faces obtained by laser-scanning, compared with a condition where photographs were taken with hair concealed and eyes closed. This suggests that cues from features such as eyebrows, and skin texture, play an important role in decision-making. Performance with the laser-scanned heads remained quite high with 3/4-view faces, where the 3-D shape of the face should be easiest to see, suggesting that the 3-D structure of the face is a further source of information contributing to the classification of its sex. Performance at judging the sex from photographs (with hair concealed) was disrupted if the photographs were inverted, which implies that the superficial cues contributing to the decision are not processed in a purely 'local' way. Performance was also disrupted if the faces were shown in photographic negatives, which is consistent with the use of 3-D information, since negation probably operates by disrupting the computation of shape from shading. In 3-D, the 'average' male face differs from the 'average' female face by having a more protuberant nose/brow and more prominent chin/jaw. The effects of manipulating the shapes of the noses and chins of the laser-scanned heads were assessed and significant effects of such manipulations on the apparent masculinity or femininity of the heads were revealed. It appears that our ability to make this most basic of facial categorisations may be multiply determined by a combination of 2-D, 3-D, and textural cues and their interrelationships.
Human subjects are able to identify the sex of faces with very high accuracy. Using photographs of adults in which hair was concealed by a swimming cap, subjects performed with 96% accuracy. Previous work has identified a number of dimensions on which the faces of men and women differ. An attempt to combine these dimensions into a single function to classify male and female faces reliably is described. Photographs were taken of 91 male and 88 female faces in full face and profile. These were measured in several ways: (i) simple distances between key points in the pictures; (ii) ratios and angles formed between key points in the pictures; (iii) three-dimensional (3-D) distances derived by combination of full-face and profile photographs. Discriminant function analysis showed that the best discriminators were derived from simple distance measurements in the full face (85% accuracy with 12 variables) and 3-D distances (85% accuracy with 6 variables). Combining measures taken from the picture plane with those derived in 3-D produced a discriminator approaching human performance (94% accuracy with 16 variables). Performance of the discriminant function was compared with that of human perceivers and found to be correlated, but far from perfectly. The difficulty of deriving a reliable function to distinguish between the sexes is discussed with reference to the development of automatic face-processing programs in machine vision. It is argued that such systems will need to incorporate an understanding of the stimuli if they are to be effective.
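The discriminant-function approach described above can be illustrated with a short sketch. The code below fits a Fisher linear discriminant to synthetic two-class "measurement" data; the variable names, sample sizes, and numeric values are purely illustrative assumptions, not the measurements or coefficients reported in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-D distance measurements per face (e.g. brow depth,
# nose protuberance, chin prominence); values are illustrative only.
n_per_class = 90
male = rng.normal(loc=[5.2, 4.8, 3.1], scale=0.6, size=(n_per_class, 3))
female = rng.normal(loc=[4.6, 4.2, 2.7], scale=0.6, size=(n_per_class, 3))

# Fisher's linear discriminant: w is proportional to Sw^-1 (mu_m - mu_f),
# where Sw is the pooled within-class covariance.
mu_m, mu_f = male.mean(axis=0), female.mean(axis=0)
Sw = np.cov(male, rowvar=False) + np.cov(female, rowvar=False)
w = np.linalg.solve(Sw, mu_m - mu_f)
threshold = w @ (mu_m + mu_f) / 2

# Classify: a projection above the midpoint threshold counts as "male".
scores = np.concatenate([male, female]) @ w
labels = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])
pred = (scores > threshold).astype(float)
accuracy = (pred == labels).mean()
print(f"discriminant accuracy: {accuracy:.2f}")
```

Adding more (and better-separated) measurement dimensions raises accuracy, which mirrors the reported gain from 85% with picture-plane distances alone to 94% when 3-D distances were combined with them.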
We investigated whether an asymmetric relationship between the perception of identity and emotional expressions in faces (Schweinberger & Soukup, 1998) may be related to differences in the relative processing speed of identity and expression information. Stimulus faces were morphed across identity within a given emotional expression, or were morphed across emotion within a given identity. In Experiment 1, consistent classifications of these images were demonstrated across a wide range of morphing, with only a relatively narrow category boundary. At the same time, classification reaction times (RTs) reflected the increased perceptual difficulty of the morphed images. In Experiment 2, we investigated the effects of variations in the irrelevant dimension on judgments of faces with respect to a relevant dimension, using a Garner-type speeded classification task. RTs for expression classifications were strongly influenced by irrelevant identity information. In contrast, RTs for identity classifications were unaffected by irrelevant expression information, and this held even for stimuli in which identity was more difficult and slower to discriminate than expression. This suggests that differences in processing speed cannot account for the asymmetric relationship between identity and emotion perception. Theoretical accounts proposing independence of identity and emotion perception are discussed in the light of these findings.
Principal components analysis (PCA) of face images is here related to subjects' performance on the same images. In two experiments subjects were shown a set of faces and asked to rate them for distinctiveness. They were subsequently shown a superset of faces and asked to identify those that had appeared originally. Replicating previous work, we found that hits and false positives (FPs) did not correlate: Those faces easy to identify as being "seen" were unrelated to those faces easy to reject as being "unseen." PCA was performed on three data sets: (1) face images with eye position standardized, (2) face images morphed to a standard template to remove shape information, and (3) the shape information from faces only. Analyses based on PCA of shape-free faces gave high predictions of FPs, whereas shape information itself contributed only to hits. Furthermore, whereas FPs were generally predictable from components early in the PCA, hits appeared to be accounted for by later components. We conclude that shape and "texture" (the image-based information remaining after morphing) may be used separately by the human face processing system, and that PCA of images offers a useful tool for understanding this system.

Psychological research on face recognition has tended to divide into two broad approaches. One approach has been to concentrate on cognitive processes following perception and to develop information processing models (see, e.g., Bruce & Young, 1986; Burton, Bruce, & Johnston, 1990; Ellis, 1986; Hay & Young, 1982; Young & Bruce, 1991). This approach has been very successful in delineating the stages involved in face recognition; however, each of these models has assumed some perceptual processing prior to input.
Indeed, some information processing models explicitly require input in the form of componential face primitives, but remain uncommitted about the nature of these primitives (see, e.g., Burton, 1994; Farah, O'Reilly, & Vecera, 1993; Valentine, 1991). Other research by psychologists has investigated the perceptual processing of face patterns, demonstrating, for example, how faces seem to be analyzed holistically rather than by being decomposed into discrete local features (see, e.g., Bartlett & Searcy, 1993; Rhodes, Brake, & Atkinson, 1993; Tanaka & Farah, 1993; Young, Hellawell, …).

Author note: We are grateful to I. Craw and N. Costen for providing the images used in these experiments and to D. Carson, who ran Experiments 1 and 2. The manuscript was improved following comments from G. Loftus, M. Reinitz, and P. Dixon. This research was supported by an SERC grant to A.M.B., V.B., and I. Craw (No. GRH 93828). Correspondence should be addressed to P. J. B. Hancock.
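The PCA analysis described in the abstract above can be sketched in a few lines. The snippet below runs PCA (via singular value decomposition) on a stand-in set of "face images"; the random data, image size, and set size are assumptions for illustration, since the original shape-free and shape-only image sets are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a set of aligned face images: 50 "faces", each a 32x32
# grayscale image flattened to a 1024-vector.  The actual analyses used
# eye-aligned, shape-free (morphed), and shape-only data sets.
faces = rng.normal(size=(50, 32 * 32))

# PCA: centre the data, then take the singular value decomposition.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

# Each row of Vt is a component ("eigenface"); each face is described
# by its loadings on these components.
loadings = centred @ Vt.T

# Variance explained falls off across successive components, which is
# what makes "early" versus "late" components a meaningful contrast.
explained = S**2 / (S**2).sum()
print(f"first 5 components explain {explained[:5].sum():.1%} of variance")
```

Relating the per-face loadings on early versus late components to behavioural hit and false-positive rates is then a matter of ordinary regression on `loadings`.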
In recognition memory for unfamiliar faces, performance for target-present items (hits) does not correlate with performance for target-absent items (false positives), a result which runs counter to the more usual mirror effect. In this paper we examine subjects' performance on face matching, and demonstrate no relationship between performance on matching items and performance on nonmatching items. This absence of a mirror effect occurs for multidistractor, 1-in-10 matching tasks (Experiment 1) and for simple paired-item tasks (Experiment 2). In Experiment 3 we demonstrate that matching familiar faces produces a strong mirror effect. However, inverting the familiar faces causes the association to disappear once more (Experiment 4). We argue that familiar and unfamiliar faces are represented in qualitatively different ways.
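The hit/false-positive correlation at issue in this abstract is an item-level analysis, which a brief sketch may make concrete. The per-item rates below are simulated (drawn independently, modelling the reported absence of a relationship for unfamiliar faces); the item count and rate ranges are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated per-face data: for each of 40 faces, a hit rate (correct
# responses when the face is present) and a false-positive rate
# (incorrect "match" responses when it is absent).
hits = rng.uniform(0.5, 1.0, size=40)
# Under a mirror effect, items that are easy to hit would also be easy
# to reject, so hits and FPs would correlate negatively.  Independent
# draws model the absence of any such relationship.
false_positives = rng.uniform(0.0, 0.5, size=40)

r = np.corrcoef(hits, false_positives)[0, 1]
print(f"item-level hit/FP correlation: r = {r:.2f}")
```

A strong mirror effect, as found for upright familiar faces in Experiment 3, would show up here as a reliably negative `r`.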