The aim of this paper is to develop a theoretical model and a set of terms for understanding and discussing how we recognize familiar faces, and the relationship between recognition and other aspects of face processing. It is suggested that there are seven distinct types of information that we derive from seen faces; these are labelled pictorial, structural, visually derived semantic, identity-specific semantic, name, expression and facial speech codes. A functional model is proposed in which structural encoding processes provide descriptions suitable for the analysis of facial speech, for the analysis of expression and for face recognition units. Recognition of familiar faces involves a match between the products of structural encoding and previously stored structural codes describing the appearance of familiar faces, held in face recognition units. Identity-specific semantic codes are then accessed from person identity nodes, and subsequently name codes are retrieved. It is also proposed that the cognitive system plays an active role in deciding whether or not the initial match is sufficiently close to indicate true recognition or merely a 'resemblance'; several factors are seen as influencing such decisions. This functional model is used to draw together data from diverse sources, including laboratory experiments, studies of everyday errors, and studies of patients with different types of cerebral injury. It is also used to clarify similarities and differences between the processes responsible for object, word and face recognition.

A human face reveals a great deal of information to a perceiver. It can tell about mood, intention and attentiveness, but it can also serve to identify a person. Of course, a person can be identified by other means than the face. Voice, body shape, gait or even clothing may all establish identity in circumstances where facial detail may not be available.
Nevertheless, a face is the most distinctive and widely used key to a person's identity, and the loss of the ability to recognize faces experienced by some neurological (prosopagnosic) patients has a profound effect on their lives. A bibliography compiled by Baron (1979) lists over 200 references. However, as H. Ellis (1975) pointed out, this considerable empirical activity was not initially accompanied by developments in theoretical understanding of the processes underlying face recognition. It is only comparatively recently that serious theoretical models have been put forward (Bruce, 1979; Baron, 1981; H. Ellis, 1981, 1983, in press a; Hay & Young, 1982; Rhodes, 1985; A. Ellis et al., in press).

In this paper we present a theoretical framework for face recognition which draws together and extends these recent models. This new framework is used to clarify what we now understand about face recognition, and also to point to where the gaps in our knowledge lie. It is also used to compare and contrast the recognition of people's faces with the recognition of other types of visual stimuli, and to explore ways in which the mechanisms involved in human facial re...
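The sequential route proposed above (structural encoding, then face recognition units, then person identity nodes, then name codes) can be sketched in code. This is an illustrative toy, not an implementation from the paper: the class names, the similarity measure and the single decision threshold are all hypothetical stand-ins for the model's components.

```python
# Illustrative toy of the proposed sequential route from a seen face to a
# name. All class names, codes and the decision threshold are hypothetical
# stand-ins, not components taken from the paper itself.
from dataclasses import dataclass

@dataclass
class FaceRecognitionUnit:
    """Holds a stored structural code for one familiar face."""
    person: str
    stored_code: tuple

@dataclass
class PersonIdentityNode:
    """Gives access to identity-specific semantics, then the name code."""
    person: str
    semantics: list
    name: str

def similarity(a, b):
    """Toy structural match score in [0, 1]."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def recognize(seen_code, frus, pins, threshold=0.8):
    """Match structural encoding output against face recognition units,
    then retrieve semantic and name codes in sequence. The cognitive
    system's resemblance decision is reduced to a single threshold."""
    best = max(frus, key=lambda u: similarity(seen_code, u.stored_code))
    score = similarity(seen_code, best.stored_code)
    if score < threshold:
        return {"decision": "resemblance only", "score": score}
    pin = pins[best.person]
    return {"decision": "recognized", "score": score,
            "semantics": pin.semantics, "name": pin.name}

frus = [FaceRecognitionUnit("A", (1, 1, 0, 1)),
        FaceRecognitionUnit("B", (0, 0, 1, 0))]
pins = {"A": PersonIdentityNode("A", ["lecturer"], "Person A"),
        "B": PersonIdentityNode("B", ["neighbour"], "Person B")}

full_match = recognize((1, 1, 0, 1), frus, pins)   # name retrieved last
near_miss = recognize((1, 0, 1, 0), frus, pins)    # below-threshold match
```

Note the strictly sequential access order in the sketch: the name code is reachable only through the person identity node, mirroring the model's claim that a face can seem familiar, and semantic facts about its owner be known, before the name is retrieved.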
Four experiments investigated matching of unfamiliar target faces taken from high-quality video against arrays of photographs. In Experiment 1, targets were present in 50% of arrays. Accuracy was poor and worsened when viewpoint and expression differed between target and array faces. In Experiment 2, targets were present in every array, but performance remained highly error prone. In Experiment 3, short video clips of the targets were shown and replayed as often as necessary, but performance levels were only slightly better than Experiment 2. Experiment 4 showed that matching was dominated by external face features. The results urge caution in the use of video images to identify people who have committed crimes. Superficial impressions of resemblance or dissimilarity between face images can be highly misleading.

The human face provides the most reliable means of person identification available to the human eye (although fingerprints and iris patterns may prove more useful for automated identification; e.g., see Daugman, 1998). Nonethe-
In this paper we describe how the microstructure of the Bruce & Young (1986) functional model of face recognition may be explored and extended using an interactive activation implementation. A simulation of the recognition of familiarity of individuals is developed which accounts for a range of published findings on the effects of semantic priming, repetition priming and distinctiveness. Finally, we offer some speculative predictions made by the model, and point to an empirical programme of research which it suggests.
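An interactive activation implementation of the kind described can be sketched with a handful of units and a standard interactive-activation (IAC) update rule. The pools, parameter values and link weights below are illustrative assumptions, not the published simulation: two face recognition units (FRUs) feed two person identity nodes (PINs) that share a semantic information unit (SIU), the arrangement that gives rise to semantic priming in models of this family.

```python
# Toy interactive-activation network (illustrative parameters, not the
# published ones). Excitatory links connect units in different pools,
# units within the PIN pool inhibit each other, and all activations
# decay toward a resting level.

REST, MIN_A, MAX_A = -0.1, -0.2, 1.0
DECAY, EXCIT, INHIB = 0.1, 0.1, 0.05

# Incoming links: unit -> list of (source unit, weight).
LINKS = {
    "PIN_A": [("FRU_A", EXCIT), ("SIU", EXCIT), ("PIN_B", -INHIB)],
    "PIN_B": [("FRU_B", EXCIT), ("SIU", EXCIT), ("PIN_A", -INHIB)],
    "SIU":   [("PIN_A", EXCIT), ("PIN_B", EXCIT)],
    "FRU_A": [("PIN_A", EXCIT)],
    "FRU_B": [("PIN_B", EXCIT)],
}

def step(act):
    """One synchronous IAC update: only positive activations propagate."""
    new = {}
    for unit, a in act.items():
        net = sum(w * max(act[src], 0.0) for src, w in LINKS.get(unit, []))
        delta = net * (MAX_A - a) if net > 0 else net * (a - MIN_A)
        new[unit] = a + delta - DECAY * (a - REST)
    return new

# Present person A's face: clamp FRU_A on and let activation spread.
act = {u: REST for u in LINKS}
for _ in range(60):
    act["FRU_A"] = 1.0
    act = step(act)

# PIN_A climbs well above rest (a familiarity signal); PIN_B receives
# only the weak residual activation routed through the shared SIU.
```

In a fuller simulation, that residual activation on the related PIN is what produces semantic priming: a semantically related face is classified as familiar faster because its identity node starts slightly above rest.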
The face communicates an impressive amount of visual information. We use it to identify its owner, to tell how they are feeling, and to help us understand what they are saying. Models of face processing have considered how we extract these kinds of meaning from the face but have ignored another important facial signal: eye gaze. However, recent neurophysiological and developmental studies have sparked some interest in the perception of gaze on the part of cognitive psychologists. In this article we begin by reviewing evidence suggesting that the eyes may constitute a special stimulus in at least two senses. First, the structure of the eyes may have evolved to provide us with a particularly powerful signal to the direction in which someone is looking, and second, we may have evolved neural mechanisms devoted to their processing. As a result, gaze direction is analysed rapidly and automatically, and is able to trigger reflexive shifts of an observer's visual attention. Although the eyes are an undoubtedly important cue, understanding where another individual is directing their attention involves more than simply analysing their gaze direction. We go on to describe research with adult participants, children and non-human primates suggesting that other cues, such as head orientation and pointing gestures, make significant contributions to the computation of another's direction of attention.

Since the early 1980s, considerable progress has been made in understanding the perceptual, cognitive and neurological processes involved in deriving various different kinds of meaning from the human face 1,2. For example, we now have a much better understanding of the operations involved in recognising a familiar face, categorising the emotional expression carried by the face, and of how we are able to use the configuration of the lips, teeth and tongue to help us interpret what the owner of a face is saying to us.
In their influential model of face processing, Bruce and Young 3 proposed that these three types of meaning - identity, expression and facial speech - are extracted in parallel by functionally independent processing systems, a suggestion for which there is now converging empirical support 4 (though see Walker et al. 5 and Schweinberger & Soukup 6 for some complications). However, in common with other cognitive models of face processing, Bruce and Young's account neglected a number of additional facial movements that convey important meaning and make substantial contributions to interpersonal communication. One such signal, gaze, has been widely studied by social psychologists, who have long known that it is used in functions such as regulating turn-taking in conversation, expressing intimacy, and exercising social control 7. Despite this, interest in the perceptual and cognitive processes underlying the analysis of gaze and gaze direction has only emerged in recent years, perhaps stimulated by the work of Perrett 8,9 and Baron-Cohen 10,11. Perrett and his colleagues have proposed a model which is based on neurophysiolog...
We present the results of the first, deep ALMA imaging covering the full 4.5 arcmin^2 of the HUDF imaged with WFC3/IR on HST. Using a 45-pointing mosaic, we have obtained a homogeneous 1.3-mm image reaching σ_1.3 ≃ 35 µJy, at a resolution of ≃ 0.7 arcsec. From an initial list of ≃ 50 > 3.5σ peaks, a rigorous analysis confirms 16 sources with S_1.3 > 120 µJy. All of these have secure galaxy counterparts with robust redshifts (⟨z⟩ = 2.15). Due to the unparalleled supporting data, the physical properties of the ALMA sources are well constrained, including their stellar masses (M*) and UV+FIR star-formation rates (SFR). Our results show that stellar mass is the best predictor of SFR in the high-redshift Universe; indeed at z ≥ 2 our ALMA sample contains 7 of the 9 galaxies in the HUDF with M* ≥ 2 × 10^10 M⊙, and we detect only one galaxy at z > 3.5, reflecting the rapid drop-off of high-mass galaxies with increasing redshift. The detections, coupled with stacking, allow us to probe the redshift/mass distribution of the 1.3-mm background down to S_1.3 ≃ 10 µJy. We find strong evidence for a steep star-forming 'main sequence' at z ≃ 2, with SFR ∝ M* and a mean specific SFR ≃ 2.2 Gyr^-1. Moreover, we find that ≃ 85% of total star formation at z ≃ 2 is enshrouded in dust, with ≃ 65% of all star formation at this epoch occurring in high-mass galaxies (M* > 2 × 10^10 M⊙), for which the average obscured:unobscured SF ratio is ≃ 200. Finally, we revisit the cosmic evolution of SFR density; we find this peaks at z ≃ 2.5, and that the star-forming Universe transits from primarily unobscured to primarily obscured at z ≃ 4.
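As a quick sanity check on the quoted figures (an illustrative calculation, not one from the paper): a mean specific SFR of 2.2 Gyr^-1, applied to a galaxy at the quoted high-mass threshold of M* = 2 × 10^10 solar masses, implies a star-formation rate of about 44 solar masses per year.

```python
# Illustrative arithmetic only: SFR = sSFR * M*, using the values quoted
# in the abstract (sSFR ~ 2.2 Gyr^-1, M* = 2e10 solar masses).

SSFR_PER_GYR = 2.2        # mean specific SFR at z ~ 2
M_STAR = 2e10             # stellar mass in solar masses

ssfr_per_yr = SSFR_PER_GYR / 1e9   # convert Gyr^-1 to yr^-1
sfr = ssfr_per_yr * M_STAR         # solar masses per year
print(round(sfr, 3))               # -> 44.0
```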
Security surveillance systems often produce poor-quality video, and this may be problematic in gathering forensic evidence. We examined the ability of subjects to identify target people captured by a commercially available video security device. In Experiment 1, subjects personally familiar with the targets performed very well at identifying them, but subjects unfamiliar with the targets performed very poorly. Police officers with experience in forensic identification performed as poorly as other subjects unfamiliar with the targets. In Experiment 2, we asked how familiar subjects can perform so well. Using the same video device, we edited clips to obscure the head, body, or gait of the targets. Obscuring body or gait produced a small decrement in recognition performance. Obscuring the targets' heads had a dramatic effect on subjects' ability to recognize the targets. These results imply that subjects recognized the targets' faces, even in these poor-quality images.
Two experiments examined the effect on recognition accuracy and latency of changing the view of faces between presentation and test. In Expt 1, all the faces were unfamiliar to the subjects. Faces at test were either unchanged, or changed in angle (e.g. full face to 3/4), expression (e.g. smiling to unsmiling) or both. Unchanged faces were recognized more quickly and accurately than faces with a change in angle or expression, which were in turn better than faces with both changed. In Expt 2, half the faces were highly familiar to the subjects, and at test unfamiliar and familiar faces were either unchanged or changed in both angle and expression. Unfamiliar faces were recognized more slowly and less accurately if changed at test, while familiar faces were recognized more slowly though no less accurately if changed (though performance was effectively at ceiling). Familiar faces were recognized more quickly and accurately than unfamiliar, though false positive rates and rejection latencies were similar for familiar and unfamiliar faces. The results are discussed in terms of the combination of information from ‘pictorial’, ‘structural’, ‘semantic’ and ‘name’ codes.