Contours and textures are important attributes of object surfaces, and are often described by combinations of local orientations in visual images. To elucidate the neural mechanisms underlying contour and texture processing, we examined receptive field (RF) structures of neurons in visual area V2 of the macaque monkey for encoding combinations of orientations. By measuring orientation tuning at several locations within the classical RF, we found that a majority (70%) of V2 neurons have similar orientation tuning throughout the RF. However, many others have RFs containing subregions tuned to different orientations, most commonly about 90 degrees apart. By measuring interactions between two positions within the RF, we found that approximately one-third of neurons show inhibitory interactions that make them selective for combinations of orientations. These results indicate that V2 neurons could play an important role in analyzing contours and textures and could provide useful cues for surface segmentation.
It is widely presumed that throughout the primate visual pathway neurons encode the relative luminance of objects (at a given light adaptation level) using two classes of monotonic function, one positively and the other negatively sloped. Based on computational considerations, we hypothesized that early visual cortex also contains neurons preferring intermediate relative luminance values. We tested this hypothesis by recording from single neurons in areas V1 and V2 of alert, fixating macaque monkeys during presentation of a large, spatially uniform patch oscillating slowly in luminance and surrounded by a static texture background. A substantial subset of neurons responsive to such low spatial frequency luminance stimuli in both areas exhibited prominent and statistically reliable response peaks to intermediate rather than minimal or maximal luminance values. When presented with static patches of different luminance but of the same spatial configuration, most neurons tested retained a preference for intermediate relative luminance. Control experiments using luminance modulation at multiple low temporal frequencies or reduced amplitude indicate that in the slow luminance-oscillating paradigm, responses were more strongly modulated by the luminance level than by the rate of luminance change. These results strongly support our hypothesis and reveal a striking cortical transformation of luminance-related information that may contribute to the perception of surface brightness and lightness. In addition, we tested many luminance-sensitive neurons with large chromatic patches oscillating slowly in luminance. Many cells, including the gray-preferring neurons, exhibited strong color preferences, suggesting a role of luminance-sensitive cells in encoding information in three-dimensional color space.
Peng X, Sereno ME, Silva AK, Lehky SR, Sereno AB. Shape selectivity in primate frontal eye field. J Neurophysiol 100: 796–814, 2008. First published May 21, 2008; doi:10.1152/jn.01188.2007. Previous neurophysiological studies of the frontal eye field (FEF) in monkeys have focused on its role in saccade target selection and gaze shift control. It has been argued that FEF neurons indicate the locations of behaviorally significant visual stimuli and are not inherently sensitive to specific features of the visual stimuli per se. Here, for the first time, we directly examined single cell responses to simple, two-dimensional shapes and found that shape selectivity exists in a substantial number of FEF cells during a passive fixation task or during the sample, delay (memory), and eye movement periods in a delayed match to sample (DMTS) task. Our data demonstrate that FEF neurons show sensory and mnemonic selectivity for stimulus shape features whether or not they are behaviorally significant for the task at hand. We also investigated the extent and localization of activation in the FEF using a variety of shape stimuli defined by static or dynamic cues employing functional magnetic resonance imaging (fMRI) in anesthetized and paralyzed monkeys. Our fMRI results support the electrophysiological findings by showing significant FEF activation for a variety of shape stimuli and cues in the absence of attentional and motor processing. This shape selectivity in FEF is comparable to previous reports in the ventral pathway, inviting a reconsideration of the functional organization of the visual system.
If a peripheral, behaviorally irrelevant cue is followed by a target at the same position, response time for the target is either facilitated or inhibited relative to the response at an uncued position, depending on the delay between target and cue (Posner, 1980; Posner & Cohen, 1984). A few studies have suggested that this spatial cueing effect (termed reflexive spatial attention) is affected by non-spatial cue and target attributes such as orientation or shape. We measured the dependence of the spatial cueing effect on the shapes of the cue and the target for a range of cue onset to target onset asynchronies (CTOAs). When cue and target shapes were different, the spatial cueing effect was facilitatory for short CTOAs and inhibitory for longer CTOAs. The facilitatory spatial effect at short CTOAs was substantially reduced when cue and target shapes were the same. We present a simple neural network to explain our data, providing a unified explanation for the spatial cueing effect and its dependence on shape similarities between the cue and the target. Our modeling suggests that one does not need independent mechanisms to explain both facilitatory and inhibitory spatial cueing effects. Because the neuronal properties (repetition suppression) and the network connectivity (mutual inhibition) of the model are present throughout many visual brain regions, it is possible that reflexive attentional effects may be distributed throughout the brain with different regions expressing different types of reflexive attention depending on their sensitivities to various aspects of visual stimuli.
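The two mechanisms named in this abstract, residual excitation at the cued location and shape-sensitive repetition suppression, can be illustrated with a toy rate model. This is only an illustrative sketch, not the authors' actual network: the exponential form, the time constants, and the gain values below are all hypothetical parameters chosen so that the qualitative pattern (facilitation at short CTOAs, inhibition at long CTOAs, reduced facilitation for repeated shapes) emerges.

```python
import math

# Hypothetical parameters (illustrative assumptions, not fitted values).
TAU_EXCITE = 100.0     # ms: residual excitation decays quickly
TAU_SUPPRESS = 500.0   # ms: repetition suppression decays slowly
GAIN_EXCITE = 1.0
GAIN_SUPPRESS_DIFF = 0.4   # suppression when cue and target shapes differ
GAIN_SUPPRESS_SAME = 0.55  # stronger suppression when the shape repeats

def cueing_effect(ctoa_ms, same_shape):
    """Net benefit at the cued location: >0 facilitation, <0 inhibition.

    Fast-decaying residual excitation dominates at short cue-target onset
    asynchronies (CTOAs); slow-decaying repetition suppression dominates
    at long CTOAs, producing inhibition of return.
    """
    excite = GAIN_EXCITE * math.exp(-ctoa_ms / TAU_EXCITE)
    gain = GAIN_SUPPRESS_SAME if same_shape else GAIN_SUPPRESS_DIFF
    suppress = gain * math.exp(-ctoa_ms / TAU_SUPPRESS)
    return excite - suppress

# Short CTOA, different shapes: net facilitation.
# Short CTOA, same shape: facilitation is reduced (stronger suppression).
# Long CTOA: net inhibition for either shape pairing.
```

Because the excitatory term decays faster than the suppressive term, the sign of the effect flips from positive to negative as the CTOA grows, matching the facilitation-then-inhibition time course described above without requiring two independent mechanisms.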
Background: A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information. Methodology/Principal Findings: We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity. Conclusions/Significance: These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.