The face communicates an impressive amount of visual information. We use it to identify its owner, to tell how they are feeling, and to help us understand what they are saying. Models of face processing have considered how we extract these kinds of meaning from the face, but have ignored another important facial signal: eye gaze. However, recent neurophysiological and developmental studies have sparked some interest in the perception of gaze on the part of cognitive psychologists. In this article we begin by reviewing evidence suggesting that the eyes may constitute a special stimulus in at least two senses. First, the structure of the eyes may have evolved to provide us with a particularly powerful signal to the direction in which someone is looking; second, we may have evolved neural mechanisms devoted to their processing. As a result, gaze direction is analysed rapidly and automatically, and is able to trigger reflexive shifts of an observer's visual attention. Although the eyes are an undoubtedly important cue, understanding where another individual is directing their attention involves more than simply analysing their gaze direction. We go on to describe research with adult participants, children and non-human primates suggesting that other cues, such as head orientation and pointing gestures, make significant contributions to the computation of another's direction of attention.

Since the early 1980s, considerable progress has been made in understanding the perceptual, cognitive and neurological processes involved in deriving various kinds of meaning from the human face 1,2. For example, we now have a much better understanding of the operations involved in recognising a familiar face, categorising the emotional expression carried by the face, and of how we are able to use the configuration of the lips, teeth and tongue to help us interpret what the owner of a face is saying to us.
In their influential model of face processing, Bruce and Young 3 proposed that these three types of meaning -identity, expression and facial speech -are extracted in parallel by functionally independent processing systems, a suggestion for which there is now converging empirical support 4 (though see Walker et al. 5 and Schweinberger & Soukup 6 for some complications). However, in common with other cognitive models of face processing, Bruce and Young's account neglected a number of additional facial movements that convey important meaning and make substantial contributions to interpersonal communication. One such signal -gaze -has been widely studied by social psychologists, who have long known that it is used in functions such as regulating turn-taking in conversation, expressing intimacy, and exercising social control 7. Despite this, interest in the perceptual and cognitive processes underlying the analysis of gaze and gaze direction has emerged only in recent years, perhaps stimulated by the work of Perrett 8,9 and Baron-Cohen 10,11. Perrett and his colleagues have proposed a model which is based on neurophysiolog...
The structure of the human face allows it to signal a wide range of useful information about a person's gender, identity, mood, etc. We show empirically that facial identity information is conveyed largely via mechanisms tuned to horizontal visual structure. Specifically, observers perform substantially better at identifying faces that have been filtered to contain just horizontal information than faces filtered to any other orientation band. We then show, computationally, that horizontal structures within faces have an unusual tendency to fall into vertically co-aligned clusters compared with images of natural scenes. We call these clusters "bar codes" and propose that they have important computational properties. We propose that it is this property that makes faces "special" visual stimuli: they are able to transmit information as a reliable spatial sequence, a highly constrained one-dimensional code. We show that such structure affords computational advantages for face detection and decoding, including robustness to normal environmental image degradation, but makes faces vulnerable to certain classes of transformation that change the sequence of bars, such as spatial inversion or contrast-polarity reversal.
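As a minimal illustrative sketch (not the filtering procedure used in the study, which operated on orientation bands in the frequency domain), the "bar code" idea can be approximated by collapsing an image into a one-dimensional vertical profile of horizontal-structure energy. The function name and toy image below are hypothetical:

```python
def horizontal_energy_profile(img):
    """Collapse a grayscale image (2D list of floats) into a 1D
    vertical profile of horizontal-structure energy.

    Horizontal structure (bars) produces strong luminance change
    *down* the image, so we sum squared vertical gradients per row.
    The resulting 1D sequence is a crude analogue of the vertically
    co-aligned "bar code" described above.
    """
    h, w = len(img), len(img[0])
    profile = []
    for y in range(h - 1):
        energy = sum((img[y + 1][x] - img[y][x]) ** 2 for x in range(w))
        profile.append(energy)
    return profile

# Toy 6x4 image: two dark horizontal bars on a light background.
img = [
    [1, 1, 1, 1],
    [0, 0, 0, 0],  # bar 1
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],  # bar 2
    [1, 1, 1, 1],
]
print(horizontal_energy_profile(img))  # -> [4, 4, 0, 4, 4]
```

Spatial inversion reverses this one-dimensional sequence, which is consistent with the vulnerability to inversion noted above.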
This paper examines how observers estimate the overall orientation of spatially disorganised textures containing variable orientations. The experiments used asymmetrical distributions of orientations to separate the predictions of different models of average-orientation estimation. Stimuli were composed of two spatially intermingled sets of oriented patches, each set having Gaussian-distributed element orientations. The threshold separation of the means of the two sets was determined for a variety of tasks. Discrimination of these textures from a reference composed of two sets with the same mean orientation was well predicted by discrimination of orientation variability. A single-interval judgement of which set contained more elements required a greater separation of the set orientations, suggesting that the sets must be resolved in the orientation domain before their properties can be represented independently. That resolution is required to perform this task further suggests that orientational skew is not coded. Threshold offsets for judgements of average orientation were re-expressed as shifts of four candidate features for coding the central tendency of texel orientations. Comparison with similar thresholds for single distributions of orientations indicated that average orientation is assigned to the centroid of a set of orientation measures.
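The centroid of a set of orientation measures can be sketched as a circular mean. Because orientation is axial (0 deg and 180 deg are the same orientation), each angle is doubled before averaging and the result halved afterwards; the function name and sample values below are hypothetical, not taken from the study:

```python
import math

def circular_mean_orientation(thetas_deg):
    """Centroid (circular mean) of axial orientation measures in degrees.

    Orientations wrap at 180 deg, so angles are doubled before the
    vector average and the resulting angle is halved, yielding a
    value in [0, 180).
    """
    s = sum(math.sin(math.radians(2 * t)) for t in thetas_deg)
    c = sum(math.cos(math.radians(2 * t)) for t in thetas_deg)
    return (math.degrees(math.atan2(s, c)) / 2.0) % 180.0

# Two intermingled sets of element orientations with means 10 deg
# apart (90 deg and 100 deg); the centroid lies between them.
print(circular_mean_orientation([85, 90, 95, 95, 100, 105]))  # close to 95.0
```

Doubling the angles avoids the wrap-around artefact of a naive arithmetic mean: for example, orientations of 0 deg and 170 deg are only 10 deg apart axially, and this centroid correctly returns 175 deg rather than 85 deg.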
The manner in which the spatial characteristics of simple discrimination tasks change with time after stimulus onset was examined. The experiments measured the improvements in sensitivity to the length, orientation, curvature, and stereoscopic depth of short lines that accrue with increased exposure durations. These improvements can be consistently interpreted in terms of a change of the spatial scale of analysis from coarse to fine over a period of at least 1000 msec. Variations in visual resolution acuity over the same period are negligible, and it is concluded that the changes in spatial characteristics concern the range of spatial filters in operation, a range that progressively shrinks after stimulus presentation.