Since the 1970s there has been a continuing interest in how people recognise familiar faces (Bruce, 1979; Ellis, 1975). This work has complemented investigations of how unfamiliar faces are processed and the findings from these two strands of research have given rise to accounts that propose qualitatively different forms of representation for familiar and unfamiliar faces. Evidence to suggest that we process familiar and unfamiliar faces in different ways is available from cognitive neuropsychology, brain scanning, and psychophysics. However, in this review we focus on the evidence, available from experimental investigations of how people recognise faces, for different types of representation existing for each type of face. Factors affecting recognition are evaluated in terms of how they apply to familiar and unfamiliar faces and categorised according to the nature of their impact. In the final section this evidence, along with recent advances in the field, is used to explore the way in which unfamiliar faces may become familiar and the factors that may be important for the development of familiar face representations.
The recognition of faces has been the focus of an extensive body of research, whereas the preliminary and prerequisite task of detecting a face has received limited attention from psychologists. Four experiments are reported that address the question of how we detect a face. Experiment 1 reveals that we use information from the scene to aid detection. In Experiment 2 we investigated which features of a face speed detection. Experiment 3 revealed inversion effects and an interaction between the effects of blurring and reduction of contrast. In Experiment 4 the sizes of the effects of reversal of orientation, luminance, and hue were compared. Luminance was found to have the greatest effect on reaction time to detect faces. The results are interpreted as suggesting that face detection proceeds by a pre-attentive stage that identifies possible face regions, followed by a focused-attention stage that employs a deformable template. Comparisons are drawn with automatic face-detection systems.
A considerable amount of research has shown that inverting a face disrupts the recognition of that face, an effect which is disproportionate to that of inverting other objects, such as houses or aeroplanes (Yin 1969). There is a variety of evidence to suggest that it is information about the configuration of facial features (their relative arrangement to each other within a face) that is disrupted by inversion, and that inversion is more disruptive to the processing of configural information than to that of featural information. Searcy and Bartlett (1996) found effects of inversion on a simultaneous-comparison task with spatially distorted and featurally distorted faces. Inversion significantly hindered participants' ability to decide, within a given time frame, whether a pair of spatially distorted faces were the same or different; this effect was not found with featurally distorted pairs, and responses made within this time frame (3 s) were longer for detecting configural differences than for detecting featural changes. There is, therefore, evidence to suggest that the processing of upright, normal faces is largely dependent on configural processing, whereas inverted faces are thought to require a more featural means of processing (see also Bartlett and Searcy 1993; Rhodes et al 1993; Lewis and Johnston 1997). It is important to note that, although there is a wide range of evidence to support the notion that two types of encoding, configural and featural, are involved in face perception, a number of different terms have been used to refer to different definitions of these types of information. Terms such as 'second-order relational information', 'configural' information, and 'holistic' information have referred to configural information as being the combination of components that make up an individual face (eg Sergent 1984), or the configuration formed by the individual arrangements of facial features (eg Diamond and Carey 1986; Bartlett and Searcy 1993).
Nevertheless, featural information is generally regarded as the presence of a particular feature or type of feature and the encoding of these parts independent of their context (Diamond and Carey 1986), whereas configural information is gained from the relative arrangement of the facial features.

The effect of rotation on configural encoding in a face-matching task. Perception, 2007. Abstract: Inversion disrupts encoding of faces because of the disruption of configural encoding, as evident in the Thatcher illusion (Thompson 1980, Perception 9 483-484). Here we consider the effect of rotation on the loss of configural encoding in a same/different matching paradigm. Participants decided whether two faces were of the same type (both normal or both Thatcherised) or not, at five angles of rotation (0°, 45°, 90°, 135°, 180°). When the faces were both of the same person, the disruption due to rotation for 'same-type' judgments was linear and approximately equal for normal and Thatcherised face pairs. In experiment 2, with different-person face pairs, the effect of rotation was much greater…
According to some accounts of face recognition (e.g., Bruce & Young, 1986), gender analysis occurs independently of identity analysis and, as a consequence, no influence of familiarity should be found on the time taken to perform sex decisions. Results of recent behavioural studies cast doubt upon this claim. Two experiments are reported that explore the influence of familiarity on sex decisions to faces (Experiment 1) and surnames (Experiment 2) of different levels of familiarity. In Experiment 1, participants were able to assign sex faster to highly familiar faces than to unfamiliar faces. Therefore, familiarity can influence the speed at which sex is analysed from faces. Similarly, in Experiment 2, participants were able to assign sex and familiarity faster to highly familiar surnames than to moderately familiar surnames. These findings are discussed in relation to the influence of sex information from identity-specific semantics, and an explanation is offered based on the Burton, Bruce, and Johnston (1990) IAC model of face recognition.

Not only is the observation of faces crucial when we need to distinguish people we know from people we do not, it is also a source of other types of valuable information which we are able to extract easily and effectively. Faces allow us to assign sex, infer emotions, estimate age, conduct various types of social interaction, and more clearly understand speech. Importantly, we can process all of the preceding despite a person remaining unfamiliar to us (Johnston &