Picking up a cup requires transporting the arm to the cup (transport component) and preshaping the hand appropriately to grasp the handle (grip component). Here, we used functional magnetic resonance imaging to examine the human neural substrates of the transport component and its relationship with the grip component. Participants were shown three-dimensional objects placed either at a near location, adjacent to the hand, or at a far location, within reach but not adjacent to the hand. Participants performed three tasks at each location as follows: (1) touching the object with the knuckles of the right hand; (2) grasping the object with the right hand; or (3) passively viewing the object. The transport component was manipulated by positioning the object in the far versus the near location. The grip component was manipulated by asking participants to grasp the object versus touching it. For the first time, we have identified the neural substrates of the transport component, which include the superior parieto-occipital cortex and the rostral superior parietal lobule. Consistent with past studies, we found specialization for the grip component in bilateral anterior intraparietal sulcus and left ventral premotor cortex; now, however, we also find activity for the grasp even when no transport is involved. In addition to finding areas specialized for the transport and grip components in parietal cortex, we found an integration of the two components in dorsal premotor cortex and supplementary motor areas, two regions that may be important for the coordination of reach and grasp.
The location of a remembered reach target can be encoded in egocentric and/or allocentric reference frames. Cortical mechanisms for egocentric reach are relatively well described, but the corresponding allocentric representations are essentially unknown. Here, we used an event-related fMRI design to distinguish human brain areas involved in these two types of representation. Our paradigm consisted of three tasks with identical stimulus display but different instructions: egocentric reach (remember absolute target location), allocentric reach (remember target location relative to a visual landmark), and a nonspatial control, color report (report color of target). During the delay phase (when only target location was specified), the egocentric and allocentric tasks elicited widely overlapping regions of cortical activity (relative to the control), but with higher activation in parietofrontal cortex for the egocentric task and higher activation in early visual cortex for the allocentric task. In addition, egocentric directional selectivity (target relative to gaze) was observed in the superior occipital gyrus and the inferior occipital gyrus, whereas allocentric directional selectivity (target relative to a visual landmark) was observed in the inferior temporal gyrus and inferior occipital gyrus. During the response phase (after movement direction had been specified either by reappearance of the visual landmark or a pro-/anti-reach instruction), the parietofrontal network resumed egocentric directional selectivity, showing higher activation for contralateral than ipsilateral reaches. These results show that allocentric and egocentric reach mechanisms use partially overlapping but different cortical substrates and that directional specification is different for target memory versus reach response.
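The distinction between the two reference frames above can be made concrete with a minimal sketch. This is an illustrative toy, not the study's analysis: positions are hypothetical 1-D horizontal locations (e.g., degrees of visual angle), and the function names are invented for clarity.

```python
# Hedged sketch of egocentric vs. allocentric target coding,
# using hypothetical 1-D positions (all names are illustrative).

def egocentric(target, gaze):
    """Target position relative to current gaze (gaze-centered code)."""
    return target - gaze

def allocentric(target, landmark):
    """Target position relative to a visual landmark (landmark-centered code)."""
    return target - landmark

# If gaze shifts but the landmark stays fixed, the allocentric code
# is unchanged while the egocentric code changes with the eyes.
target, landmark = 5.0, 8.0
print(egocentric(target, gaze=2.0), allocentric(target, landmark))
print(egocentric(target, gaze=4.0), allocentric(target, landmark))
```

The key property the sketch illustrates is invariance: the allocentric value depends only on the target-landmark relation, which is why a landmark reappearing at response time suffices to recover movement direction in the allocentric task.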
Reach-to-grasp actions require coordination of different segments of the upper limbs. Previous studies have examined the neural substrates of arm transport and hand grip components of such actions; however, a third component has been largely neglected: the orientation of the wrist and hand appropriately for the object. Here we used functional magnetic resonance imaging adaptation (fMRA) to investigate human brain areas involved in processing hand orientation during grasping movements. Participants used the dominant right hand to grasp a rod with the four fingers opposing the thumb or to reach and touch the rod with the knuckles without visual feedback. In a control condition, participants passively viewed the rod. Trials in a slow event-related design consisted of two sequential stimuli in which the rod orientation changed (requiring a change in wrist posture while grasping but not reaching or looking) or remained the same. We found reduced activation, that is, adaptation, in superior parieto-occipital cortex (SPOC) when the object was repeatedly grasped with the same orientation. In contrast, there was no adaptation when reaching or looking at an object in the same orientation, suggesting that hand orientation, rather than object orientation, was the critical factor. These results agree with recent neurophysiological research showing that a parieto-occipital area of macaque (V6A) is modulated by hand orientation during reach-to-grasp movements. We suggest that the human dorsomedial stream, like that in the macaque, plays a key role in processing hand orientation in reach-to-grasp movements.
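The logic of fMRI adaptation used above (and in the grasp-dimension study below) can be summarized numerically: a region coding a feature responds less when that feature repeats across two sequential stimuli than when it changes. The following is a minimal sketch with invented beta values and a conventional normalized index; it is not the authors' analysis pipeline.

```python
# Hedged sketch of an fMRI adaptation index. Beta values are
# hypothetical per-trial response estimates for one region of
# interest; positive index = reduced response to repeats (adaptation).
from statistics import mean

def adaptation_index(novel_betas, repeated_betas):
    """Return a normalized novel-vs-repeated contrast:
    (novel - repeated) / (novel + repeated), using condition means."""
    n, r = mean(novel_betas), mean(repeated_betas)
    return (n - r) / (n + r)

# Illustrative values only: a region such as SPOC might respond less
# when the required hand orientation repeats across the trial pair.
novel = [1.2, 1.1, 1.3, 1.0]      # orientation changed between stimuli
repeated = [0.7, 0.8, 0.6, 0.75]  # orientation repeated
print(round(adaptation_index(novel, repeated), 3))
```

An index near zero (no adaptation), as reported for the reaching and passive-viewing conditions, is what licenses the inference that hand orientation rather than visual object orientation drives the SPOC effect.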
Behavioral and neuropsychological research suggests that delayed actions rely on different neural substrates than immediate actions; however, the specific brain areas implicated in the two types of actions remain unknown. We used functional magnetic resonance imaging (fMRI) to measure human brain activation during delayed grasping and reaching. Specifically, we examined activation during visual stimulation and action execution separated by an 18-s delay interval in which subjects had to remember an intended action toward the remembered object. The long delay interval enabled us to unambiguously distinguish visual, memory-related, and action responses. Most strikingly, we observed reactivation of the lateral occipital complex (LOC), a ventral-stream area implicated in visual object recognition, and early visual cortex (EVC) at the time of action. Importantly, this reactivation was observed even though participants remained in complete darkness with no visual stimulation at the time of the action. Moreover, within EVC, higher activation was observed for grasping than reaching during both vision and action execution. Areas in the dorsal visual stream were activated during action execution as expected and, for some, also during vision. Several areas, including the anterior intraparietal sulcus (aIPS), dorsal premotor cortex (PMd), primary motor cortex (M1), and the supplementary motor area (SMA), showed sustained activation during the delay phase. We propose that during delayed actions, dorsal-stream areas plan and maintain coarse action goals; however, at the time of execution, motor programming requires re-recruitment of detailed visual information about the object through reactivation of (1) ventral-stream areas involved in object perception and (2) early visual areas that contain richly detailed visual representations, particularly for grasping.
The cortical mechanisms for reach have been studied extensively, but directionally selective mechanisms for visuospatial target memory, movement planning, and movement execution have not been clearly differentiated in the human. We used an event-related fMRI design with a visuospatial memory delay, followed by a pro-/anti-reach instruction, a planning delay, and finally a "go" instruction for movement. This sequence yielded temporally separable preparatory responses that expanded from modest parieto-frontal activation for visual target memory to broad occipital-parietal-frontal activation during planning and execution. Using the pro/anti instruction to differentiate visual and motor directional selectivity during planning, we found that one occipital area showed contralateral "visual" selectivity, whereas a broad constellation of left hemisphere occipital, parietal, and frontal areas showed contralateral "movement" selectivity. Temporal analysis of these areas through the entire memory-planning sequence revealed early visual selectivity in most areas, followed by movement selectivity in most areas, with all areas showing a stereotypical visuo-movement transition. Cross-correlation of these spatial parameters through time revealed separate spatiotemporally correlated modules for visual input, motor output, and visuo-movement transformations that spanned occipital, parietal, and frontal cortex. These results demonstrate a highly distributed occipital-parietal-frontal reach network involved in the transformation of retrospective sensory information into prospective movement plans.
Caudal area PE (PEc) of the macaque posterior parietal cortex has been shown to be a crucial node in visuomotor coordination during reaching. The present study aimed to characterize the visual and somatosensory organization of this cortical area. Visual stimulation activated 53% of PEc neurons. The overwhelming majority (89%) of these visual cells were best activated by a dark stimulus on a lighter background. Somatosensory stimulation activated 56% of PEc neurons: most were joint neurons (73%); a minority (24%) showed tactile receptive fields, most of them located on the arms. Area PEc showed no clear retinotopy or somatotopy. Among the cells tested for both somatosensory and visual sensitivity, 22% were bimodal, 25% unimodal somatosensory, 34% unimodal visual, and 19% were insensitive to either stimulation. No clear clustering of the different classes of sensory neurons was observed. Visual and somatosensory receptive fields of bimodal cells were not in register. In humans, damage to the likely homolog of macaque PEc produces deficits in locomotion and in whole-body interaction with the visual environment. The present data show that macaque PEc has sensory properties and a functional organization consistent with the involvement of this area in those processes.
Grasping behaviors require the selection of grasp-relevant object dimensions, independent of overall object size. Previous neuroimaging studies found that the intraparietal cortex processes object size, but it is unknown whether the graspable dimension (i.e., grasp axis between selected points on the object) or the overall size of objects triggers activation in that region. We used functional magnetic resonance imaging adaptation to investigate human brain areas involved in processing the grasp-relevant dimension of real 3-dimensional objects in grasping and viewing tasks. Trials consisted of 2 sequential stimuli in which the object's grasp-relevant dimension, its global size, or both were novel or repeated. We found that calcarine and extrastriate visual areas adapted to object size regardless of the grasp-relevant dimension during viewing tasks. In contrast, the superior parietal occipital cortex (SPOC) and lateral occipital complex of the left hemisphere adapted to the grasp-relevant dimension regardless of object size and task. Finally, the dorsal premotor cortex adapted to the grasp-relevant dimension in grasping, but not in viewing, tasks, suggesting that motor processing was complete at this stage. Taken together, our results provide a complete cortical circuit for progressive transformation of general object properties into grasp-related responses.