We used an equivalent noise (EN) paradigm to examine how the human visual system pools local estimates of direction across space in order to encode global direction. Observers estimated the mean direction (clockwise or counter-clockwise of vertical) of a field of moving band-pass elements whose directions were drawn from a wrapped normal distribution. By measuring discrimination thresholds for mean direction as a function of directional variance, we were able to infer both the precision of observers' representation of each element's direction (i.e., local noise) and how many of these estimates they were averaging (i.e., global pooling). We estimated EN for various numbers of moving elements occupying regions of various sizes. We report that both local and global limits on direction integration are determined by the number of elements present in the display (irrespective of their density or the size of the region they occupy), and we go on to show how this dependence can be understood in terms of neural noise. Specifically, we use Monte Carlo simulations to show that a maximum-likelihood operator, operating on pooled directional signals from visual cortex corrupted by Poisson noise, accounts for psychophysical data across all conditions tested, as well as for motion coherence thresholds (collected under similar experimental conditions). A population vector-averaging scheme (essentially a special case of ML estimation) produces similar predictions but outperforms subjects at high levels of directional variability and fails to predict motion coherence thresholds.
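A minimal Monte Carlo sketch of the equivalent-noise logic is given below: a simulated observer averages a fixed number of noisy local direction estimates, and the resulting threshold follows the standard EN relation. The values of sigma_int and n_pooled are purely illustrative, and the sketch omits the study's full model (maximum-likelihood decoding of Poisson-corrupted population responses).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_threshold(sigma_ext, sigma_int, n_pooled, n_trials=20000):
    """Simulate an observer who averages n_pooled noisy local direction estimates.
    Each element's direction is drawn from a normal distribution with SD sigma_ext
    (deg); each local estimate is further corrupted by internal noise sigma_int (deg)."""
    element_dirs = rng.normal(0.0, sigma_ext, size=(n_trials, n_pooled))
    local_estimates = element_dirs + rng.normal(0.0, sigma_int, size=(n_trials, n_pooled))
    return local_estimates.mean(axis=1).std()   # SD of the pooled estimate ~ threshold

# Equivalent-noise prediction: threshold^2 = (sigma_int^2 + sigma_ext^2) / n_pooled
sigma_int, n_pooled = 3.0, 8                    # illustrative values only
for sigma_ext in (0.5, 2.0, 8.0, 32.0):
    sim = simulated_threshold(sigma_ext, sigma_int, n_pooled)
    pred = np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_pooled)
    print(f"sigma_ext={sigma_ext:5.1f}  simulated={sim:5.2f}  predicted={pred:5.2f}")
```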
Many animals use cues from another animal’s gaze to help distinguish friend from foe [1–3]. In humans, the direction of someone’s gaze provides insight into their focus of interest and state of mind [4] and there is increasing evidence linking abnormal gaze behaviors to clinical conditions such as schizophrenia and autism [5–11]. This fundamental role of another’s gaze is buoyed by the discovery of specific brain areas dedicated to encoding directions of gaze in faces [12–14]. Surprisingly, however, very little is known about how others’ direction of gaze is interpreted. Here we apply a Bayesian framework that has been successfully applied to sensory and motor domains [15–19] to show that humans have a prior expectation that other people’s gaze is directed toward them. This expectation dominates perception when there is high uncertainty, such as at night or when the other person is wearing sunglasses. We presented participants with synthetic faces viewed under high and low levels of uncertainty and manipulated the faces by adding noise to the eyes. Then, we asked the participants to judge relative gaze directions. We found that all participants systematically perceived the noisy gaze as being directed more toward them. This suggests that the adult nervous system internally represents a prior for gaze and highlights the importance of experience in developing our interpretation of another’s gaze.
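As a rough illustration of the Bayesian account (not the fitted model from the study), the sketch below combines a Gaussian sensory estimate of gaze direction with a Gaussian prior centred on direct gaze; the prior width of 5 deg is an arbitrary illustrative choice.

```python
def posterior_gaze(sensory_deg, sigma_sensory, prior_mean=0.0, sigma_prior=5.0):
    """Reliability-weighted combination of a noisy sensory estimate of gaze
    direction with a Gaussian prior centred on direct gaze (0 deg).
    sigma_prior = 5.0 deg is an arbitrary illustrative value."""
    w_sensory = 1.0 / sigma_sensory ** 2
    w_prior = 1.0 / sigma_prior ** 2
    return (w_sensory * sensory_deg + w_prior * prior_mean) / (w_sensory + w_prior)

# A face whose gaze is truly averted by 10 deg, viewed under increasing uncertainty
# (e.g. noise added to the eyes): the percept is pulled toward direct gaze (0 deg).
for sigma_sensory in (1.0, 5.0, 20.0):
    print(sigma_sensory, round(posterior_gaze(10.0, sigma_sensory), 2))
```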
Object boundaries in the natural environment are often defined by changes in luminance; in other cases, however, there may be no difference in average luminance across the boundary, which is instead defined by more subtle 'second-order' cues, such as changes in the contrast of a fine-grained texture. The detection of luminance boundaries may be readily explained in terms of visual cortical neurons, which compute the linear sum of the excitatory and inhibitory inputs to different parts of their receptive field. The detection of second-order stimuli is less well understood, but is thought to involve a separate nonlinear processing stream, in which boundary detectors would receive inputs from many smaller subunits. To address this, we have examined the properties of cortical neurons which respond to both first- and second-order stimuli. We show that the inputs to these neurons are also oriented, but with no fixed orientational relationship to the neurons they subserve. Our results suggest a flexible mechanism by which the visual cortex can detect object boundaries regardless of whether they are defined by luminance or texture.
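One common way to formalise such a nonlinear stream is a filter-rectify-filter cascade: small bandpass subunits are rectified and then pooled by a larger, coarser-scale filter. The 1-D sketch below is a generic illustration of that idea, with arbitrary frequencies and filter sizes, not the specific circuit characterised in the study.

```python
import numpy as np

def gabor(x, freq, sigma):
    """1-D Gabor kernel standing in for a bandpass, orientation-tuned subunit."""
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * x)

x = np.arange(-256, 256)
carrier = np.cos(2 * np.pi * 0.25 * x)                # fine first-order texture
envelope = 0.5 * (1 + np.cos(2 * np.pi * 0.01 * x))   # slow contrast modulation
stimulus = envelope * carrier                         # second-order pattern: no change
                                                      # in mean luminance across the boundary

# Stage 1: small subunit tuned to the carrier frequency
stage1 = np.convolve(stimulus, gabor(np.arange(-20, 21), 0.25, 6.0), mode="same")
# Stage 2: rectification, the essential nonlinearity
rectified = np.abs(stage1)
# Stage 3: a larger, lower-frequency filter recovers the contrast envelope
stage2 = np.convolve(rectified, gabor(np.arange(-200, 201), 0.01, 60.0), mode="same")
# stage2 now modulates at the envelope frequency even though the stimulus contains
# no luminance energy at that scale.
```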
Previous studies on gaze perception have identified two opposing effects of head orientation on perceived gaze direction: one repulsive and the other attractive. However, the relationship between these two effects has remained unclear. Using a gaze categorization task, the current study examined the effect of head orientation on the perceived direction of gaze in a whole-head condition and an eye-region condition. We found that the perceived direction of gaze was generally biased in the direction opposite to head orientation (a repulsive effect). Importantly, the repulsive effect was more pronounced in the eye-region condition than in the whole-head condition. Based on these findings, we developed a dual-route model, which proposes that the two opposing effects of head orientation occur through two distinct routes. Within the framework of this dual-route model, we explain and reconcile the findings of previous studies, and provide a functional account of the attractive and repulsive effects and their interaction.
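One schematic way to express the dual-route idea is as a sum of an eye-region route that is repelled from head orientation and a head route that pulls the estimate back toward it, with the head route contributing only when the whole head is visible. The weights below are purely hypothetical and are meant only to show how the two routes can yield a smaller net repulsion in the whole-head condition.

```python
def perceived_gaze(eye_signal_deg, head_deg, whole_head=True,
                   k_repulse=0.3, k_attract=0.2):
    """Schematic dual-route combination (hypothetical weights, not fitted values).
    Route 1: an eye-region estimate biased away from head orientation (repulsion).
    Route 2: a head-orientation signal that pulls perceived gaze toward the head
             (attraction), available only when the whole head is visible."""
    route1 = eye_signal_deg - k_repulse * head_deg
    route2 = k_attract * head_deg if whole_head else 0.0
    return route1 + route2

head = 30.0  # head rotated 30 deg; the eyes signal straight-ahead gaze
print(perceived_gaze(0.0, head, whole_head=True))    # -3.0 deg: weaker net repulsion
print(perceived_gaze(0.0, head, whole_head=False))   # -9.0 deg: stronger repulsion
```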
We consider how the detection of second-order contrast structure depends on the orientation and spatial frequency of first-order luminance structure. For patterns composed of a bandpass noise carrier multiplied by a contrast envelope function, we show that sensitivity to the envelope varies in proportion to the spatial frequency of the carrier. For oriented carriers at low spatial frequencies, detection of the contrast envelope is easier when the envelope and carrier are perpendicular, but this dependency diminishes as the spatial frequency of the carrier increases. These differences are not attributable to either the detection of side-bands or the presence of spurious contrast structure in unmodulated carrier images. A final experiment measured envelope detection in the presence of noise masks. The results indicate that filtering that is bandpass in both orientation and spatial frequency precedes the detection of second-order structure.
The accurate perception of another person's gaze direction underlies most social interactions and provides important information about his or her future intentions. As a first step to measuring gaze perception, most experiments determine the range of gaze directions that observers judge as being direct: the cone of direct gaze. This measurement has revealed the flexibility of observers' perception of gaze and provides a useful benchmark against which to test clinical populations with abnormal gaze behavior. Here, we manipulated effective signal strength by adding noise to the eyes of synthetic face stimuli or removing face information. We sought to move beyond a descriptive account of gaze categorization by fitting a model to the data that relies on changing the uncertainty associated with an estimate of gaze direction as a function of the signal strength. This model accounts for all the data and provides useful insight into the visual processes underlying normal gaze perception.
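The kind of model described can be sketched as a Gaussian estimate of gaze direction whose standard deviation grows as signal strength falls, with "direct" responses given whenever the estimate falls inside fixed category boundaries; noisier eyes then yield a wider cone of direct gaze. The boundary and noise values below are illustrative, not the fitted parameters.

```python
from scipy.stats import norm

def p_direct(true_gaze_deg, sigma_est, boundary_deg=4.0):
    """Probability of a 'direct' response: the mass of a Gaussian gaze estimate
    (mean = true gaze, SD = sigma_est) falling between category boundaries at
    +/- boundary_deg. Larger sigma_est stands in for weaker signal strength."""
    return (norm.cdf(boundary_deg, loc=true_gaze_deg, scale=sigma_est)
            - norm.cdf(-boundary_deg, loc=true_gaze_deg, scale=sigma_est))

# A gaze averted by 6 deg is judged 'direct' more often as uncertainty grows,
# i.e. the cone of direct gaze widens with noisier eyes or missing face information.
for sigma_est in (1.0, 3.0, 8.0):
    print(sigma_est, round(p_direct(6.0, sigma_est), 3))
```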
Temporal and spatial response to second-order stimuli in cat area 18. J. Neurophysiol. 80: 2811-2823, 1998. Approximately one-half of the neurons in cat area 18 respond to contrast envelope stimuli, consisting of a sinewave carrier whose contrast is modulated by a drifting sinewave envelope of lower spatial frequency. These stimuli should fail to elicit a response from a conventional linear neuron because they are designed to contain no spatial frequency components within the cell's luminance-defined frequency passband. We measured neurons' responses to envelope stimuli by varying both the drift rate and spatial frequency of the contrast modulation. These data were then compared with the same neurons' spatial and temporal properties obtained with luminance-defined sinewave gratings. Most neurons' responses to the envelope stimuli were spatially and temporally bandpass, with bandwidths comparable with those measured with luminance gratings. The temporal responses of these neurons (temporal frequency tuning and latency) were systematically slower when tested with envelope stimuli than with luminance gratings. The simplest kind of model that can accommodate these results is one having separate, parallel streams of bandpass processing for luminance and envelope stimuli.