The frontal eye field (FEF) and superior colliculus (SC) are thought to form two parallel systems for generating saccadic eye movements. The SC is classically thought to mediate reflex-like orienting movements. It can therefore be hypothesized that the FEF exerts higher-level control over a visual grasp reflex. To test this hypothesis we studied the saccades of patients who had undergone discrete unilateral removals of frontal lobe tissue for the relief of intractable epilepsy. The responses of these patients were compared to those of normal subjects and of patients with unilateral temporal lobe removals. Two tasks were used. In the first task the subject was instructed to look in the direction of a visual cue that appeared unexpectedly 12 degrees to the left or right of a central fixation point (FP), in order to identify a patterned target that appeared 200 ms or more later. In the second, "anti-saccade," task the subject was required to look not at the location of the cue but in the opposite direction, an equal distance from the FP, where the patterned target appeared after 200 ms or more. Three major observations emerged from the present study. Most frontal patients with lesions involving both the dorsolateral and mesial cortex had long-term difficulties in suppressing disallowed glances to visual stimuli that suddenly appeared in peripheral vision. In such patients, saccades that were eventually directed away from the cue and towards the target were nearly always triggered by the appearance of the target itself, irrespective of whether or not the "anti-saccade" was preceded by a disallowed glance. Such eye movements away from the cue were only rarely generated spontaneously across the blank screen during the cue-target interval.
The latency of these visually triggered saccades was very short (80-140 ms) compared with that of correct saccades (170-200 ms) to the cue when the cue and target were on the same side, suggesting that the structures removed in these patients normally trigger saccades only after considerable computations have already been performed. The results support the view that the frontal lobes, particularly the dorsolateral region, which contains the FEF, and possibly the supplementary motor area, contribute to the generation of complex saccadic eye-movement behaviour. More specifically, they appear to aid in suppressing unwanted reflex-like oculomotor activity and in triggering the appropriate volitional movements when the goal of the movement is known but not yet visible.
Gaze, the direction of the visual axis in space, is the sum of eye position relative to the head (E) and head position relative to space (H). The classic explanation of how a rapid orienting gaze shift is controlled, which we call the oculocentric motor strategy, assumes that 1) a saccadic eye movement is programmed with an amplitude equal to the target's offset angle, 2) this eye movement is programmed without reference to whether a head movement is planned, 3) if the head turns simultaneously, the saccade is reduced in size by an amount equal to the head's contribution, and 4) the saccade is attenuated by the slow phase of the vestibuloocular reflex (VOR). Humans have an oculomotor range (OMR) of about +/- 55 degrees, so using the oculocentric motor strategy to acquire targets lying beyond the OMR would require programming saccades that cannot physically be made. We studied, in normal human subjects, rapid horizontal gaze shifts to visible and remembered targets situated within and beyond the OMR, at offsets ranging from 30 to 160 degrees. The subjects' heads were attached to an apparatus that permitted short, unexpected perturbations of the head trajectory; the acceleration and deceleration phases of the head perturbation could be timed to occur at different points in the eye movement. Single-step rapid gaze shifts of all sizes up to at least 160 degrees (the limit studied) could be accomplished with the classic single eye saccade and an accompanying saccade-like head movement. In gaze shifts of less than approximately 45 degrees, when head motion was prevented totally by the brake, the eye attained the target. For larger target eccentricities the gaze shift was interrupted by the brake, and the average eye saccade amplitude was approximately 45 degrees, well short of the OMR. Thus saccadic eye movement amplitude was neurally, not mechanically, limited.
When the head's motion was not perturbed by the brake, eye saccade amplitude was a function of head velocity: for a given target offset, the faster the head, the smaller the saccade. For gaze shifts to targets beyond the OMR, when head velocity was low the eye frequently attained the 45-degree position limit and remained there, immobile, until gaze attained the target. (ABSTRACT TRUNCATED AT 400 WORDS)
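The decomposition described above can be sketched numerically. This is an illustrative sketch, not the study's model: the helper name and the splitting rule are assumptions, while the numbers (OMR of about 55 degrees, an eye saccade plateau near 45 degrees) come from the text. It shows why targets beyond the OMR force a head contribution.

```python
OMR_DEG = 55.0           # approximate mechanical oculomotor range (eye-in-head)
NEURAL_LIMIT_DEG = 45.0  # observed plateau of eye saccade amplitude

def decompose_gaze_shift(target_offset_deg: float) -> dict:
    """Split a horizontal gaze shift (gaze = eye-in-head + head-in-space)
    so the eye saccade never exceeds the ~45 deg neural limit."""
    eye = min(target_offset_deg, NEURAL_LIMIT_DEG)
    head = target_offset_deg - eye
    return {"eye_deg": eye, "head_deg": head, "gaze_deg": eye + head}

# A 160 deg target cannot be acquired by the eye alone: the oculocentric
# strategy would require a 160 deg saccade, exceeding even the OMR.
shift = decompose_gaze_shift(160.0)
print(shift)  # {'eye_deg': 45.0, 'head_deg': 115.0, 'gaze_deg': 160.0}
```

For small gaze shifts the head term vanishes, matching the observation that the eye alone attained targets under roughly 45 degrees when the head was braked.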
Visual neurons typically receive information from a limited portion of the retina, and such receptive fields are a key organizing principle for much of visual cortex. At the same time, there is strong evidence that receptive fields transiently shift around the time of saccades. The nature of the shift is controversial: Previous studies have found shifts consistent with a role for perceptual constancy; other studies suggest a role in the allocation of spatial attention. Here we present evidence that both the previously documented functions exist in individual neurons in primate cortical area V4. Remapping associated with perceptual constancy occurs for saccades in all directions, while attentional shifts mainly occur for neurons with receptive fields in the same hemifield as the saccade end point. The latter are relatively sluggish and can be observed even during saccade planning. Overall these results suggest a complex interplay of visual and extraretinal influences during the execution of saccades.
The purpose of this investigation was to describe the neural constraints on three-dimensional (3-D) orientations of the eye in space (Es), head in space (Hs), and eye in head (Eh) during visual fixations in the monkey and the control strategies used to implement these constraints during head-free gaze saccades. Dual scleral search coil signals were used to compute 3-D orientation quaternions, two-dimensional (2-D) direction vectors, and 3-D angular velocity vectors for both the eye and head in three monkeys during the following visual tasks: radial to/from center, repetitive horizontal, nonrepetitive oblique, random (wide 2-D range), and random with pin-hole goggles. Although 2-D gaze direction (of Es) was controlled more tightly than the contributing 2-D Hs and Eh components, the torsional standard deviation of Es was greater (mean 3.55 degrees) than Hs (3.10 degrees), which in turn was greater than Eh (1.87 degrees) during random fixations. Thus the 3-D Es range appeared to be the byproduct of Hs and Eh constraints, resulting in a pseudoplanar Es range that was twisted (in orthogonal coordinates) like the zero torsion range of Fick coordinates. The Hs fixation range was similarly Fick-like, whereas the Eh fixation range was quasiplanar. The latter Eh range was maintained through exquisite saccade/slow phase coordination, i.e., during each head movement, multiple anticipatory saccades drove the eye torsionally out of the planar range such that subsequent slow phases drove the eye back toward the fixation range. The Fick-like Hs constraint was maintained by the following strategies: first, during purely vertical/horizontal movements, the head rotated about constantly oriented axes that closely resembled physical Fick gimbals, i.e., about head-fixed horizontal axes and space-fixed vertical axes, respectively (although in 1 animal, the latter constraint was relaxed during repetitive horizontal movements, allowing for trajectory optimization).
However, during large oblique movements, head orientation made transient but dramatic departures from the zero-torsion Fick surface, taking the shortest path between two torsionally eccentric fixation points on the surface. Moreover, in the pin-hole goggle task, the head-orientation range flattened significantly, suggesting a task-dependent default strategy similar to Listing's law. These and previous observations suggest two quasi-independent brain stem circuits: an oculomotor 2-D to 3-D transformation that coordinates anticipatory saccades with slow phases to uphold Listing's law, and a flexible "Fick operator" that selects head motor error; both nested within a dynamic gaze feedback loop.
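The "twist" of the zero-torsion Fick surface mentioned above can be reproduced with a few lines of quaternion arithmetic. This is a minimal sketch under assumed conventions (x along the line of sight as the torsion axis, z vertical; Hamilton product): composing a rotation about the space-fixed vertical axis with one about the carried horizontal axis, as a Fick gimbal does, yields a nonzero torsional quaternion component in orthogonal coordinates even though Fick torsion is zero.

```python
import math

def quat_axis_angle(axis, angle_deg):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    h = math.radians(angle_deg) / 2.0
    s = math.sin(h)
    return (math.cos(h), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product: rotation b applied first, then a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

# Fick gimbal: turn first about the space-fixed vertical axis (z),
# then about the carried horizontal axis (y) -- an intrinsic z-then-y sequence.
q_h = quat_axis_angle((0, 0, 1), 30.0)  # 30 deg horizontal
q_v = quat_axis_angle((0, 1, 0), 20.0)  # 20 deg vertical
q = quat_mul(q_h, q_v)

# Zero torsion in Fick coordinates, yet the torsional (x) quaternion
# component is nonzero in orthogonal coordinates -- the twisted surface.
print(round(q[1], 4))  # -0.0449
```

For an oblique orientation the torsional component scales roughly with the product of the horizontal and vertical angles, which is why the surface is flat along the pure horizontal and vertical meridians and twisted in the oblique quadrants.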
The goal of this study was to identify and model the three-dimensional (3-D) geometric transformations required for accurate saccades to distant visual targets from arbitrary initial eye positions. In abstract 2-D models, target displacement in space, retinal error (RE), and saccade vectors are trivially interchangeable. However, in real 3-D space, RE is a nontrivial function of objective target displacement and 3-D eye position. To determine the physiological implications of this, a visuomotor "lookup table" was modeled by mapping the horizontal/vertical components of RE onto the corresponding vector components of eye displacement in Listing's plane. This provided the motor error (ME) command for a 3-D displacement-feedback loop. The output of this loop controlled an oculomotor plant that mechanically implemented the position-dependent saccade axis tilts required for Listing's law. This model correctly maintained Listing's law but was unable to correct torsional position deviations from Listing's plane. Moreover, the model also generated systematic errors in saccade direction (as a function of eye position components orthogonal to RE), predicting errors in final gaze direction of up to 25 degrees in the oculomotor range. Plant modifications could not solve these problems, because the intrinsic oculomotor input-output geometry forced a fixed visuomotor mapping to choose between either accuracy or Listing's law. This was reflected internally by a sensorimotor divergence between input-defined visual displacement signals (inherently 2-D and defined in reference to the eye) and output-defined motor displacement signals (inherently 3-D and defined in reference to the head). These problems were solved by rotating RE by estimated 3-D eye position (i.e., a reference frame transformation), inputting the result into a 2-D-to-3-D "Listing's law operator," and then finally subtracting initial 3-D eye position to yield the correct ME.
This model was accurate and upheld Listing's law from all initial positions. Moreover, it suggested specific experiments to invasively distinguish visual and motor displacement codes, predicting a systematic position dependence in the directional tuning of RE versus a fixed-vector tuning in ME. We conclude that visual and motor displacement spaces are geometrically distinct, such that a fixed visual-motor mapping will produce systematic and measurable behavioral errors. To avoid these errors, the brain would need to implement both a 3-D position-dependent reference frame transformation and a nontrivial 2-D-to-3-D transformation. Furthermore, our simulations provide new experimental paradigms to invasively identify the physiological progression of these spatial transformations by reexamining the position-dependent geometry of displacement code directions in the superior colliculus, cerebellum, and various cortical visuomotor areas.
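The reference frame transformation step above can be illustrated with a minimal numeric sketch. This is not the paper's implementation; the vector values and helper names are hypothetical, and the conventions (x along the line of sight at primary position, z vertical, Hamilton product) are assumptions. It shows why an eye-coded retinal error must be rotated by estimated 3-D eye position before it can serve as a head-referenced motor command.

```python
import math

def quat_from_axis_angle(axis, angle_deg):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    h = math.radians(angle_deg) / 2.0
    s = math.sin(h)
    return (math.cos(h), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate_vec(q, v):
    """Rotate vector v by unit quaternion q: v' = q v q*."""
    qc = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), qc)
    return (x, y, z)

# A hypothetical small retinal error, coded in eye coordinates:
re_eye = (0.0, 0.10, 0.05)
# With the eye rotated 40 deg about the vertical axis, the same eye-coded
# error corresponds to a different head-centered displacement. A mapping
# that ignores this rotation misdirects the saccade by up to the full 40 deg.
eye_pos = quat_from_axis_angle((0, 0, 1), 40.0)
re_head = rotate_vec(eye_pos, re_eye)
print([round(c, 4) for c in re_head])
```

In the model described above, this rotated error would then pass through the 2-D-to-3-D Listing's law operator, with initial 3-D eye position subtracted to give motor error.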
Traveling waves of neural activity are frequently observed to occur in concert with the presentation of a sensory stimulus or the execution of a movement. Although such waves have been studied for decades, little is known about their function. Here we show that traveling waves in the primate extrastriate visual cortex provide a means of integrating sensory and motor signals. Specifically, we describe a traveling wave of local field potential (LFP) activity in cortical area V4 of macaque monkeys that is triggered by the execution of saccadic eye movements. These waves sweep across the V4 retinotopic map, following a consistent path from the foveal to the peripheral representations of space; their amplitudes correlate with the direction and size of each saccade. Moreover, these waves are associated with a reorganization of the postsaccadic neuronal firing patterns, which follow a similar retinotopic progression, potentially prioritizing the processing of behaviorally relevant stimuli.
1. Orienting movements, consisting of coordinated eye and head displacements, direct the visual axis to the source of a sensory stimulus. A recent hypothesis suggests that the CNS may control gaze position (gaze = eye-relative-to-space = eye-relative-to-head + head-relative-to-space) by the use of a feedback circuit wherein an internally derived representation of gaze motor error drives both eye and head premotor circuits. In this paper we examine the effect of behavioral task on the individual and summed trajectories of horizontal eye- and head-orienting movements to gain more insight into how the eyes and head are coupled and controlled in different behavioral situations. 2. Cats whose heads were either restrained (head-fixed) or unrestrained (head-free) were trained to make orienting movements of any desired amplitude in a simple cat-and-mouse game we call the barrier paradigm. A rectangular opaque barrier was placed in front of the hungry animal who either oriented to a food target that was visible to one side of the barrier or oriented to a location on an edge of the barrier where it predicted the target would reappear from behind the barrier. 3. The dynamics (e.g., maximum velocity) and duration of eye- and head-orienting movements were affected by the task. Saccadic eye movements (head-fixed) elicited by the visible target attained greater velocity and had shorter durations than comparable amplitude saccades directed toward the predicted target. A similar observation has been made in human and monkey. In addition, when the head was unrestrained both the eye and head movements (and therefore gaze movements) were faster and shorter in the visible- compared with the predicted-target conditions. Nevertheless, the relative contributions of the eye and head to the overall gaze displacement remained task independent: i.e., the distance traveled by the eye and head movements was determined by the size of the gaze shift only. 
This relationship was maintained because the velocities of the eye and head movements covaried in the different behavioral situations. Gaze-velocity profiles also had characteristic shapes that were dependent on task. In the predicted-target condition these profiles tended to have flattened peaks, whereas when the target was visible the peaks were sharper. 4. Presentation of a visual cue (e.g., reappearance of food target) immediately before (less than 50 ms) the onset of a gaze shift to a predicted target triggered a midflight increase in first the eye- and, after approximately 20 ms, the head-movement velocity. (ABSTRACT TRUNCATED AT 400 WORDS)