Single-unit recordings have identified a region in the posterior parietal cortex (PPC) of the monkey that represents and updates visual space in a gaze-centered frame. Here, using event-related functional magnetic resonance imaging, we identified an analogous bilateral region in the human PPC that shows contralateral topography for memory-guided eye movements and arm movements. Furthermore, when eye movements reversed the remembered horizontal target location relative to the gaze fixation point, this PPC region exchanged activity across the two cortical lobules. This shows that the human PPC dynamically updates the spatial goals for action in a gaze-centered frame.
Most evidence that the brain uses Bayesian inference to integrate noisy sensory signals optimally has been obtained by showing that the noise levels in each modality separately can predict performance in combined conditions. Such a forward approach is difficult to implement when the various signals cannot be measured in isolation, as in spatial orientation, which involves the processing of visual, somatosensory, and vestibular cues. Instead, we applied an inverse probabilistic approach, based on optimal observer theory. Our goal was to investigate whether the perceptual differences found when probing two different states (body-in-space and head-in-space orientation) can be reconciled by a shared scheme using all available sensory signals. Using a psychometric approach, we tested seven human subjects on two orientation estimates at tilts <120°: perception of body tilt [subjective body tilt (SBT)] and perception of visual vertical [subjective visual vertical (SVV)]. In all subjects, the SBT was more accurate than the SVV, which showed substantial systematic errors for tilt angles beyond 60°. Variability increased with tilt angle in both tasks, but was consistently lower in the SVV. The sensory integration model fitted both datasets closely. A further experiment, in which supine subjects judged their head orientation relative to the body, independently confirmed the head-on-body noise predicted by the model. Model predictions based on the noise properties derived for the various modalities were also consistent with previously published deficits in vestibular and somatosensory patients. We conclude that Bayesian computations can account for the typical differences in spatial orientation judgments associated with different task requirements.
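The optimal-observer scheme described in this abstract rests on reliability-weighted cue combination: each sensory estimate is weighted by its inverse variance, the maximum-likelihood solution for independent Gaussian cues. A minimal sketch with illustrative values (not the paper's fitted parameters):

```python
import numpy as np

def fuse(estimates, variances):
    """Combine noisy sensory estimates by inverse-variance weighting,
    the maximum-likelihood solution for independent Gaussian cues."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.asarray(estimates, dtype=float)
    fused = np.sum(w * est) / np.sum(w)
    fused_var = 1.0 / np.sum(w)  # always below the smallest input variance
    return fused, fused_var

# Hypothetical vestibular and somatosensory tilt cues (deg, deg^2):
# the fused estimate lies closer to the more reliable (lower-variance) cue.
tilt, var = fuse([80.0, 70.0], [100.0, 25.0])
```

The same weighting logic underlies the model fits described above; in the actual inverse approach the per-modality variances are free parameters estimated from the psychometric data rather than measured in isolation.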
To plan a reaching movement, the brain must integrate information about the location of the target with information about the limb selected for the reach. Here, we applied rapid event-related 3-T fMRI to investigate this process in human subjects (n = 16) preparing a reach following two successive visual instruction cues. One cue instructed which arm to use; the other cue instructed the location of the reach target. We hypothesized that regions involved in the integration of target and effector information should not only respond to each of the two instruction cues, but should respond more strongly to the second cue due to the added integrative processing to establish the reach plan. We found that bilateral regions in the posterior parietal cortex, the premotor cortex, the medial frontal cortex, and the insular cortex, as well as the left dorsolateral prefrontal cortex and an area in the right lateral occipital sulcus, responded in this manner, implicating them in target-arm integration. We further determined the functional properties of these regions in terms of spatial and effector specificity. This showed that the posterior parietal cortex and the dorsal premotor cortex specify both the spatial location of a target and the effector selected for the response. We therefore conclude that these regions are selectively engaged in the neural computations for reach planning, consistent with the results from physiological studies in nonhuman primates.
Eye-hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye-hand visuomotor system incorporates closed-loop visual feedback but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye-head-shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
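The reference-frame transformation sketched above, a gaze-centered motor error rotated as a function of eye and head orientation, can be illustrated in two dimensions. All vectors and angles below are hypothetical, and the real eye-head-shoulder geometry is three-dimensional and nonlinear:

```python
import numpy as np

def rotate(v, angle_deg):
    """Rotate a 2-D vector counterclockwise by angle_deg."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ v

# Gaze-centered motor error: target minus hand, both in eye coordinates
target_eye = np.array([10.0, 5.0])
hand_eye = np.array([2.0, 1.0])
motor_error_eye = target_eye - hand_eye

# Transform into a body-centered command by applying eye-in-head and
# head-on-body orientation signals (illustrative angles)
eye_in_head_deg = 20.0
head_on_body_deg = 10.0
motor_error_body = rotate(motor_error_eye, eye_in_head_deg + head_on_body_deg)
```

This is the sense in which the displacement command "cannot be treated independently of its frame of origin": the same gaze-centered error maps to different body-centered commands whenever eye or head orientation changes.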
Recently, using event-related functional MRI (fMRI), we located a bilateral region in the human posterior parietal cortex (retIPS) that topographically represents and updates targets for saccades and pointing movements in eye-centered coordinates. To generate movements, this spatial information must be integrated with the selected effector. We now tested whether the activation in retIPS is dependent on the hand selected. Using 4-T fMRI, we compared the activation produced by movements, using either eyes or the left or right hand, to targets presented either leftward or rightward of central fixation. The majority of the regions activated during saccades were also activated during pointing movements, including occipital, posterior parietal, and premotor cortex. The topographic retIPS region was activated more strongly for saccades than for pointing. The activation associated with pointing was significantly greater when pointing with the unseen hand to targets ipsilateral to the hand. For example, although there was activation in the left retIPS when pointing to targets on the right with the left hand, the activation was significantly greater when using the right hand. The mirror symmetric effect was observed in the right retIPS. Similar hand preferences were observed in a nearby anterior occipital region. This effector specificity is consistent with previous clinical and behavioral studies showing that each hand is more effective in directing movements to targets in ipsilateral visual space. We conclude that not only do these regions code target location, but they also appear to integrate target selection with effector selection.
De Vrijer M, Medendorp WP, Van Gisbergen JA. Shared computational mechanism for tilt compensation accounts for biased verticality percepts in motion and pattern vision. J Neurophysiol 99: 915-930, 2008. First published December 19, 2007; doi:10.1152/jn.00921.2007. To determine the direction of object motion in external space, the brain must combine retinal motion signals and information about the orientation of the eyes in space. We assessed the accuracy of this process in eight laterally tilted subjects who aligned the motion direction of a random-dot pattern (30% coherence, moving at 6°/s) with their perceived direction of gravity (motion vertical) in otherwise complete darkness. For comparison, we also tested the ability to align an adjustable visual line (12° diameter) to the direction of gravity (line vertical). For small head tilts (<40°), systematic errors in either task were almost negligible. In contrast, tilts >60° revealed a pattern of large systematic errors (often >30°) that was virtually identical in both tasks. Regression analysis confirmed that mean errors in the two tasks were closely related, with slopes close to 1.0 and correlations >0.89. Control experiments ruled out that motion settings were based on processing of individual single-dot paths. We conclude that the conversion of both motion direction and line orientation on the retina into a spatial frame of reference involves a shared computational strategy. Simulations with two spatial-orientation models suggest that the pattern of systematic errors may be the downside of an optimal strategy for dealing with imperfections in the tilt signal that is implemented before the reference-frame transformation.
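One way an optimal strategy can produce the systematic errors described above is a prior for upright: combining the noisy tilt signal with a Gaussian prior centered at 0° barely affects small tilts but pulls large tilt estimates toward upright, so the tilt compensation applied before the reference-frame transformation falls short at large angles. A toy sketch with illustrative noise values, not the paper's fitted model:

```python
def map_tilt_estimate(true_tilt_deg, sensor_sigma, prior_sigma):
    """MAP estimate of head tilt: a noisy tilt signal (mean = true tilt)
    combined with a Gaussian prior centered at upright (0 deg)."""
    w_sensor = 1.0 / sensor_sigma**2   # precision of the tilt signal
    w_prior = 1.0 / prior_sigma**2     # precision of the upright prior
    return (w_sensor * true_tilt_deg) / (w_sensor + w_prior)

# Undercompensation (true tilt minus estimated tilt) grows with tilt angle,
# mimicking the large verticality errors seen beyond ~60 deg.
for tilt in (30.0, 90.0):
    est = map_tilt_estimate(tilt, sensor_sigma=10.0, prior_sigma=40.0)
    error = tilt - est
```

In this sketch the bias grows linearly with tilt; in the paper's simulations tilt-dependent sensory noise shapes the error pattern further.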
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
We applied magnetoencephalography (MEG) to record oscillatory brain activity from human subjects engaged in planning a double-step saccade. In the experiments, subjects (n = 8) remembered the locations of 2 sequentially flashed targets (each followed by a 2-s delay), presented in either the left or right visual hemifield, and then made saccades to the 2 locations in sequence. We examined changes in spectral power in relation to target location (left or right) and memory load (one or two targets), excluding error trials based on concurrent eye tracking. During the delay period following the first target, power in the alpha (8-12 Hz) and beta (13-25 Hz) bands was significantly suppressed in the hemisphere contralateral to the target. When the second target was presented, there was a further suppression in the alpha- and beta-band power over both hemispheres. In this period, the same sensors also showed contralateral power enhancements in the gamma band (60-90 Hz), most significantly prior to the initiation of the saccades. Adaptive spatial filtering techniques localized the neural sources of the directionally selective power changes in parieto-occipital areas. These results provide further support for a topographic organization for delayed saccades in human parietal and occipital cortex.
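Band-limited power changes like the alpha and beta suppression reported above are quantified from the spectral power of each sensor's signal in a given time window. A minimal sketch on a synthetic trace (made-up sampling rate and signal; real MEG pipelines use multitaper or wavelet estimates per trial):

```python
import numpy as np

fs = 600.0                       # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)  # a 2-s delay-period window

# Synthetic sensor trace: a 10-Hz "alpha" rhythm plus broadband noise
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)

# Periodogram from the real-valued FFT
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
psd = np.abs(np.fft.rfft(trace)) ** 2 / (fs * t.size)

def band_power(freqs, psd, lo, hi):
    """Sum spectral power over the frequency band [lo, hi] in Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

alpha = band_power(freqs, psd, 8.0, 12.0)   # 8-12 Hz
beta = band_power(freqs, psd, 13.0, 25.0)   # 13-25 Hz
```

Contrasting such band-power values between hemispheres (contralateral vs. ipsilateral to the remembered target) across trials yields the suppression and enhancement effects described in the abstract.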