We investigated localization of brief visual targets during reflexive eye movements (optokinetic nystagmus). Subjects mislocalized these targets in the direction of the slow eye movement. This error decreased shortly before a saccade and temporarily increased afterwards. The pattern of mislocalization differs markedly from mislocalization during voluntary eye movements in the presence of visual references, but spatially resembles mislocalization during voluntary eye movements in darkness. Because neither reflexive eye movements nor voluntary eye movements in darkness have explicit (visual) goals, these data support the view that visual goals contribute to perceptual stability by serving as an important link between pre- and post-saccadic scenes.
It is widely debated whether fast phases of the reflexive optokinetic nystagmus (OKN) share properties with another class of fast eye movements, visually guided saccades. Conclusions drawn from previous studies were complicated by the fact that a subject's task influences the exact type of OKN elicited: stare vs. look nystagmus. With the current study we set out to determine, in the same subjects, the exact dynamic properties (main sequence) of various forms of fast eye movements. We recorded fast phases of look and stare nystagmus as well as visually guided saccades. Our data clearly show that fast phases of look and stare nystagmus differ with respect to their main sequence: fast phases of stare nystagmus had lower peak velocities and longer durations than fast phases of look nystagmus. Furthermore, we found no differences between fast phases of stare nystagmus evoked with limited and unlimited dot lifetimes. Visually guided saccades fell on the same main sequence as fast phases of look nystagmus, while they had higher peak velocities and shorter durations than fast phases of stare nystagmus. Our data underline the critical role of the behavioral task (e.g., reflexive vs. intentional) in determining the exact spatiotemporal characteristics of fast eye movements.
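The main-sequence relation referred to above — the stereotyped dependence of peak velocity and duration on movement amplitude — is often summarized by a saturating fit of peak velocity against amplitude. The sketch below illustrates one common model form; the exponential parameterization, parameter values, and synthetic data are assumptions for demonstration and are not taken from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# One common main-sequence model: peak velocity saturates with amplitude,
#   V(A) = Vmax * (1 - exp(-A / C)),
# where Vmax and C are free parameters (values here are illustrative).
def main_sequence(amplitude, v_max, c):
    return v_max * (1.0 - np.exp(-amplitude / c))

# Synthetic "saccade" data: amplitudes in degrees, peak velocities in deg/s
amps = np.array([2, 5, 10, 15, 20, 30], dtype=float)
true_vmax, true_c = 500.0, 8.0
rng = np.random.default_rng(0)
peaks = main_sequence(amps, true_vmax, true_c) + rng.normal(0, 5, amps.size)

# Fit the model; two populations of fast eye movements would lie on
# different main sequences if their fitted curves differ systematically.
(v_max_fit, c_fit), _ = curve_fit(main_sequence, amps, peaks, p0=(400, 10))
print(f"Vmax = {v_max_fit:.0f} deg/s, C = {c_fit:.1f} deg")
```

Comparing such fits between conditions (e.g., look vs. stare nystagmus) is one way to quantify whether two classes of fast eye movements share a main sequence.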
Primates perform saccadic eye movements in order to bring the image of an interesting target onto the fovea. Compared to stationary targets, saccades toward moving targets are computationally more demanding since the oculomotor system must use speed and direction information about the target as well as knowledge about its own processing latency to program an adequate, predictive saccade vector. In monkeys, different brain regions have been implicated in the control of voluntary saccades, among them the lateral intraparietal area (LIP). Here we asked whether activity in area LIP reflects the distance between the fovea and the saccade target, the amplitude of an upcoming saccade, or both. We recorded single unit activity in area LIP of two macaque monkeys. First, we determined for each neuron its preferred saccade direction. Then, monkeys performed visually guided saccades along the preferred direction toward either stationary or moving targets in pseudo-randomized order. LIP population activity allowed us to decode both the distance between fovea and saccade target and the size of an upcoming saccade. Previous work has shown comparable results for saccade direction (Graf and Andersen, 2014a,b). Hence, LIP population activity allows prediction of any two-dimensional saccade vector. Functional equivalents of macaque area LIP have been identified in humans. Accordingly, our results provide further support for the concept of using activity from area LIP as a neural basis for the control of an oculomotor brain-machine interface.
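Reading out a metric such as saccade amplitude from population activity is commonly done with a linear decoder. The toy sketch below illustrates that general idea with invented Gaussian tuning curves and a least-squares readout; it is not the decoding method used in the study above, and all numbers are assumptions.

```python
import numpy as np

# Toy sketch: decode saccade amplitude from LIP-like population activity.
# Tuning curves, rates, and noise are invented for illustration.
rng = np.random.default_rng(1)

n_neurons, n_trials = 50, 200
amplitudes = rng.uniform(2, 20, n_trials)   # saccade amplitude (deg), per trial
pref = rng.uniform(2, 20, n_neurons)        # each neuron's preferred amplitude
sigma = 5.0                                 # tuning width (deg)

# Gaussian tuning: each neuron fires most near its preferred amplitude
rates = np.exp(-(amplitudes[:, None] - pref[None, :])**2 / (2 * sigma**2))
rates += rng.normal(0, 0.05, rates.shape)   # additive response noise

# Linear least-squares decoder (with intercept term)
X = np.column_stack([rates, np.ones(n_trials)])
w, *_ = np.linalg.lstsq(X, amplitudes, rcond=None)
decoded = X @ w
err = np.abs(decoded - amplitudes).mean()
print(f"mean absolute decoding error: {err:.2f} deg")
```

A decoder of this kind, trained on one set of trials and evaluated on held-out trials, is the generic building block behind population-based saccade-vector prediction.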
Kaminiarz A, Schlack A, Hoffmann KP, Lappe M, Bremmer F. Visual selectivity for heading in the macaque ventral intraparietal area. J Neurophysiol 112: 2470-2480, 2014. First published August 13, 2014; doi:10.1152/jn.00410.2014.

The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real as compared with simulated eye movements were smaller, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.

self-motion; primate; parietal cortex; eye movements

SELF-MOTION THROUGH AN ENVIRONMENT induces visual, vestibular, tactile, and auditory signals.
Neurophysiological research over the last two decades has shown in the animal model, i.e., the macaque monkey, how these signals interact to enhance and disambiguate the perception of heading during self-motion. Two areas of the primate extrastriate and parietal cortex proved to be of specific importance in this context, i.e., the medial superior temporal area (area MST) and the ventral intraparietal area (area VIP). Neurons in area MST respond to visual and vestibular self-motion signals, and their causal role in heading perception has been confirmed (Bremmer et al.
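For pure forward translation without eye movements, heading coincides with the focus of expansion (FOE) of the radial flow field described above. A minimal sketch of recovering the FOE from noise-free flow vectors follows; the image coordinates and flow gain are invented for illustration, and no tracking-induced distortion is modeled.

```python
import numpy as np

# Sketch: recover heading as the focus of expansion (FOE) of a radial
# flow field. For pure translation, every flow vector points away from
# the FOE, so the FOE is the least-squares intersection of the flow lines.
rng = np.random.default_rng(3)

foe = np.array([0.1, -0.05])             # true FOE / heading (image coords)
pts = rng.uniform(-1, 1, (100, 2))       # sampled image locations
flow = (pts - foe) * 0.5                 # radial expansion flow (no noise)

# A flow vector (u, v) at point p constrains the FOE f to the line through
# p with direction (u, v): (p - f) x (u, v) = 0, i.e. u*f_y - v*f_x = u*p_y - v*p_x
A = np.column_stack([-flow[:, 1], flow[:, 0]])
b = flow[:, 0] * pts[:, 1] - flow[:, 1] * pts[:, 0]
foe_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(foe_est)
```

During tracking eye movements the flow field gains a rotational component, so this purely radial solution no longer applies directly — which is exactly why extraretinal signals or compensation mechanisms of the kind studied in areas MST and VIP become necessary.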
Many neurons in the macaque ventral intraparietal area (VIP) are multimodal, i.e., they respond not only to visual but also to tactile, auditory and vestibular stimulation. Anatomical studies have shown distinct projections between area VIP and a region of premotor cortex controlling head movements. A specific function of area VIP could be to guide movements in order to head for and/or to avoid objects in near extrapersonal space. This behavioral role would require a consistent representation of visual motion within 3-D space and enhanced activity for nearby motion signals. Accordingly, in our present study we investigated whether neurons in area VIP are sensitive to moving visual stimuli containing depth signals from horizontal disparity. We recorded single unit activity from area VIP of two awake behaving monkeys (Macaca mulatta) fixating a central target on a projection screen. Sensitivity of neurons to horizontal disparity was assessed by presenting large-field moving images (random dot fields) stereoscopically to the two eyes by means of LCD shutter goggles synchronized with the stimulus computer. During an individual trial, stimuli had one of seven different disparity values ranging from 3° uncrossed (far) to 3° crossed (near) disparity in 1° steps. Stimuli moved at constant speed in all simulated depth planes. Different disparity values were presented across trials in pseudo-randomized order. Sixty-one percent of the motion sensitive cells had a statistically significant selectivity for the horizontal disparity of the stimulus (p < 0.05, distribution-free ANOVA). Seventy-five percent of them preferred crossed-disparity values, i.e., moving stimuli in near space, with the highest mean activity for the nearest stimulus. At the population level, preferred direction of visual stimulus motion was not affected by horizontal disparity.
Thus, our findings are in agreement with the behavioral role of area VIP in the representation of movement in near extrapersonal space.
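The disparity-selectivity test described above — a distribution-free ANOVA across the seven disparity values — can be illustrated with simulated firing rates. In the sketch below the Kruskal-Wallis test stands in as one common distribution-free ANOVA; the sign convention (crossed/near as negative), tuning profile, and trial counts are invented assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import kruskal

# Sketch: test whether firing rate depends on horizontal disparity.
# Seven disparity values in 1° steps; here negative = crossed (near),
# positive = uncrossed (far) — a convention chosen for illustration.
disparities = np.arange(-3, 4)
rng = np.random.default_rng(2)

# Hypothetical near-preferring neuron: mean rate falls off toward far space
def simulate_rates(d, n_trials=10):
    mean_rate = 30.0 - 4.0 * d            # higher rate for near (d < 0)
    return rng.poisson(max(mean_rate, 1.0), n_trials)

groups = [simulate_rates(d) for d in disparities]

# Distribution-free ANOVA across the seven disparity conditions
h_stat, p_value = kruskal(*groups)
print(f"H = {h_stat:.1f}, p = {p_value:.3g}")
```

A neuron would be counted as disparity-selective when such a test rejects the null hypothesis of equal rate distributions across disparities at p < 0.05.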