Due to extensive homologies, monkeys provide a sophisticated animal model of human visual attention. However, electrophysiological recording in behaving animals has traditionally relied on simplified stimuli and controlled eye position. To validate monkeys as a model for human attention during realistic free viewing, we contrasted human (n = 5) and monkey (n = 5) gaze behavior using 115 natural and artificial video clips. Monkeys exhibited broader ranges of saccadic endpoints and amplitudes and showed differences in fixation and intersaccadic intervals. We compared the tendencies of both species to gaze toward scene elements with similar low-level visual attributes using two computational models: luminance contrast and saliency. Saliency was more predictive of both human and monkey gaze, and it predicted human saccades better than monkey saccades overall. Quantifying interobserver gaze consistency revealed that while humans were highly consistent, monkeys were more heterogeneous and were best predicted by the saliency model. To address these discrepancies, we further analyzed high-interest gaze targets: locations simultaneously chosen by at least two monkeys. These were on average very similar to human gaze targets, both in specific locations and in saliency values. Although substantial quantitative differences were revealed, strong similarities existed between the species, especially when analysis focused on high-interest targets.
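Of the two models compared above, luminance contrast is the simpler and can be illustrated with a minimal sketch: a local contrast map computed as the absolute difference between small-window and large-window mean luminance at each pixel. The window sizes and box-filter implementation here are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

def luminance_contrast_map(image, center=3, surround=15):
    """Crude local luminance-contrast map: per pixel, the absolute
    difference between a small (center) and a large (surround)
    box-filtered mean luminance. Window sizes are arbitrary
    illustrative choices, not values from the study."""
    def box_mean(img, k):
        # O(1)-per-pixel box filter via 2-D cumulative sums,
        # with edge padding so output size matches input size.
        pad = k // 2
        p = np.pad(img, pad, mode="edge")
        c = np.cumsum(np.cumsum(p, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col for window sums
        h, w = img.shape
        return (c[k:k + h, k:k + w] - c[:h, k:k + w]
                - c[k:k + h, :w] + c[:h, :w]) / (k * k)
    return np.abs(box_mean(image, center) - box_mean(image, surround))
```

A uniform image yields zero contrast everywhere, while a luminance edge produces a band of high contrast around it, matching the intuition that gaze-predictive contrast is a center-surround quantity rather than raw intensity.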
Models of visual attention postulate the existence of a saliency map whose function is to guide attention and gaze to the most conspicuous regions in a visual scene. Although cortical representations of saliency have been reported, there is mounting evidence for a subcortical saliency mechanism, which predates the evolution of neocortex. Here, we conduct a strong test of the saliency hypothesis by comparing the output of a well-established computational saliency model with the activation of neurons in the primate superior colliculus (SC), a midbrain structure associated with attention and gaze, while monkeys watched videos of natural scenes. We find that the activity of SC superficial visual-layer neurons (SCs), specifically, is well predicted by the model. This saliency representation is unlikely to be inherited from fronto-parietal cortices, which do not project to SCs, but may be computed in SCs and relayed to other areas via tectothalamic pathways.
Color is important for segmenting objects from backgrounds, which can in turn facilitate visual search in complex scenes. However, brain areas involved in orienting the eyes toward colored stimuli in our environment are not believed to have access to color information. Here, we show that neurons in the intermediate layers of the monkey superior colliculus (SC), a critical structure for the production of saccadic eye movements, can respond to isoluminant color stimuli with the same magnitude as a maximum-contrast luminance stimulus. In contrast, neurons from the superficial SC layers showed little color-related activity. Crucially, visual onset latencies were 30-35 ms longer for color, implying that luminance and chrominance information reach the SC through distinct pathways and that the observed color-related activity is not the result of residual luminance signals. Furthermore, these differences in visual onset latency translated directly into differences in saccadic reaction time. The results demonstrate that the saccadic system can signal the presence of chromatic stimuli only one stage from the brainstem premotor circuitry that drives the eyes.
Marino RA, Rodgers CK, Levy R, Munoz DP. Spatial relationships of visuomotor transformations in the superior colliculus map. J Neurophysiol 100: 2564-2576, 2008. First published August 27, 2008; doi:10.1152/jn.90688.2008. The oculomotor system is well understood compared with other motor systems; however, we do not yet know the spatial details of sensory to motor transformations. This study addresses this issue by quantifying the spatial relationships between visual and motor responses in the superior colliculus (SC), a midbrain structure involved in the transformation of visual information into saccadic motor command signals. We collected extracellular single-unit recordings from 150 visual-motor (VM) and 28 motor (M) neurons in two monkeys trained to perform a nonpredictive visually guided saccade task to 110 possible target locations. Motor related discharge was greater than visual related discharge in 94% (141/150) of the VM neurons. Across the population of VM neurons, the mean locations of the peak visual and motor responses were spatially aligned. The visual response fields (RFs) were significantly smaller than and usually contained within the motor RFs. Converting RFs into the SC coordinate system significantly reduced any misalignment between peak visual and motor locations. RF size increased with increasing eccentricity in visual space but remained invariant on the SC map beyond 1 mm of the rostral pole. RF shape was significantly more symmetric in SC map coordinates compared with visual space coordinates. These results demonstrate that VM neurons specify the same location of a target stimulus in the visual field as the intended location of an upcoming saccade with minimal misalignment to downstream structures. The computational consequences of spatially transforming visual field coordinates to the SC map resulted in increased alignment and spatial symmetry during visual-sensory to saccadic-motor transformations.
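The conversion of RFs from visual-field coordinates to SC map coordinates described above is commonly modeled with the complex-logarithmic mapping of Ottes, Van Gisbergen, and Eggermont (1986). The sketch below assumes that standard mapping; the parameter values (A = 3 deg, Bu = 1.4 mm, Bv = 1.8 mm) are that model's published fits and may differ from the values used in any particular study.

```python
import numpy as np

def visual_to_sc_mm(R, phi_deg, A=3.0, Bu=1.4, Bv=1.8):
    """Map a visual-field point (eccentricity R in degrees, meridional
    direction phi in degrees, 0 = horizontal) to SC surface coordinates
    (u, v) in mm, using the complex-logarithmic model of Ottes et al.
    (1986). A, Bu, Bv are that model's fit parameters (assumed here)."""
    phi = np.deg2rad(phi_deg)
    # u: distance from the rostral pole along the SC surface (mm)
    u = Bu * np.log(np.sqrt(R**2 + 2 * A * R * np.cos(phi) + A**2) / A)
    # v: displacement perpendicular to the horizontal meridian (mm)
    v = Bv * np.arctan2(R * np.sin(phi), R * np.cos(phi) + A)
    return u, v
```

The logarithmic compression of eccentricity in this mapping is what allows RFs that grow with eccentricity in visual space to remain roughly constant in size on the SC map, consistent with the invariance beyond 1 mm of the rostral pole reported above.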
Here we examined the influence of the visual response in the superior colliculus (SC) (an oculomotor control structure integrating sensory, motor and cognitive signals) on the development of the motor command that drives saccadic eye movements in monkeys. We varied stimulus luminance to alter the timing and magnitude of visual responses in the SC and examined how these changes correlated with resulting saccade behavior. Increasing target luminance resulted in multiple modulations of the visual response, including increased magnitude and decreased response onset latency. These signal modulations correlated strongly with changes in saccade latency and metrics, indicating that these signal properties carry through to the neural computations that determine when, where and how fast the eyes will move. Thus, components of the earliest part of the visual response in the SC provide important building blocks for the neural basis of the sensory-motor transformation, highlighting a critical link between the properties of the visual response and saccade behavior.