Recent models of movement generation in motor cortex have sought to explain neural activity not as a function of movement parameters (the representational approach) but as a dynamical system acting at the level of the population. Despite evidence supporting this framework, the literature lacks a complete evaluation of representational models and their integration with dynamical systems. Using a representational, velocity-tuning-based simulation of center-out reaching, we show that incorporating variable latency offsets between neural activity and kinematics is sufficient to generate rotational dynamics at the level of neural populations, a phenomenon observed in motor cortex. However, we developed a covariance-matched permutation test (CMPT) that reassigns neural data between task conditions independently for each neuron while maintaining overall neuron-to-neuron relationships, revealing that rotations based on the representational model did not uniquely depend on the underlying condition structure. In contrast, rotations based on either a dynamical model or motor cortex data did depend on this structure, providing evidence that the dynamical model more readily explains motor cortex activity. Importantly, by implementing a recurrent neural network, we demonstrate that both representational tuning properties and rotational dynamics emerge, showing that a dynamical system can reproduce previous findings of representational tuning. Finally, using motor cortex data in combination with the CMPT, we show that results based on small numbers of neurons or conditions should be interpreted cautiously, potentially informing future experimental design. Together, our findings reinforce the view that representational models lack the explanatory power to describe complex aspects of single-neuron and population-level activity.
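The shuffling step at the core of a test like the CMPT can be illustrated with a short sketch (hypothetical data shapes and statistic; the covariance-matching constraint that preserves neuron-to-neuron relationships in the actual test is omitted here):

```python
import numpy as np

def per_neuron_condition_shuffle(data, rng):
    """Independently permute the condition axis for each neuron.

    data: array of shape (n_conditions, n_timepoints, n_neurons).
    Returns a surrogate dataset in which each neuron's condition
    labels are reassigned at random, destroying the shared
    condition structure across the population.
    """
    n_cond, n_time, n_neurons = data.shape
    shuffled = np.empty_like(data)
    for n in range(n_neurons):
        perm = rng.permutation(n_cond)
        shuffled[:, :, n] = data[perm, :, n]
    return shuffled

def permutation_pvalue(data, statistic, n_perms=500, seed=0):
    """Compare a population statistic against a per-neuron shuffle null."""
    rng = np.random.default_rng(seed)
    observed = statistic(data)
    null = np.array([statistic(per_neuron_condition_shuffle(data, rng))
                     for _ in range(n_perms)])
    # one-sided p-value: fraction of surrogates at least as large
    return observed, (1 + np.sum(null >= observed)) / (n_perms + 1)
```

A statistic that depends on the shared condition structure (e.g., the strength of population rotations) should fall outside this null; one that does not should be unaffected by the shuffle.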
The functional communication of neurons in cortical networks underlies higher cognitive processes. Yet, little is known about the organization of the single-neuron network or its relationship to the synchronization processes that are essential for its formation. Here, we show that the functional single-neuron network of three fronto-parietal areas during active behavior of macaque monkeys is highly complex. The network was closely connected (small-world) and consisted of functional modules spanning these areas. Surprisingly, the importance of different neurons to the network was highly heterogeneous, with a small number of neurons contributing strongly to the network function (hubs), which were in turn strongly inter-connected (rich club). Examination of network synchronization revealed that the identified rich club consisted of neurons that were synchronized in the beta or low-frequency range, whereas other neurons were mostly synchronized in a non-oscillatory manner. Therefore, oscillatory synchrony may be a central communication mechanism for highly organized functional spiking networks. DOI: http://dx.doi.org/10.7554/eLife.15719.001
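The rich-club property described above is commonly quantified with the rich-club coefficient: the density of connections among nodes whose degree exceeds k. A minimal sketch for an unweighted functional connectivity matrix (illustrative only; not the paper's exact pipeline):

```python
import numpy as np

def rich_club_coefficient(adj, k):
    """Unnormalized rich-club coefficient phi(k) for an undirected,
    unweighted adjacency matrix: the density of connections among
    the nodes whose degree exceeds k."""
    adj = np.asarray(adj)
    degree = adj.sum(axis=0)
    rich = np.where(degree > k)[0]
    n = len(rich)
    if n < 2:
        return np.nan                   # coefficient undefined
    sub = adj[np.ix_(rich, rich)]
    edges = sub.sum() / 2               # undirected: each edge counted twice
    return edges / (n * (n - 1) / 2)
```

In practice phi(k) is usually normalized against randomized networks with the same degree sequence; a rich club appears as a normalized coefficient above 1 at high k.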
One of the primary ways we interact with the world is using our hands. In macaques, the circuit spanning the anterior intraparietal area, the hand area of the ventral premotor cortex, and the primary motor cortex is necessary for transforming visual information into grasping movements. However, no comprehensive model exists that links all steps of processing from vision to action. We hypothesized that a recurrent neural network mimicking the modular structure of the anatomical circuit and trained to use visual features of objects to generate the required muscle dynamics used by primates to grasp objects would give insight into the computations of the grasping circuit. Internal activity of modular networks trained with these constraints strongly resembled neural activity recorded from the grasping circuit during grasping and paralleled the similarities between brain regions. Network activity during the different phases of the task could be explained by linear dynamics for maintaining a distributed movement plan across the network in the absence of visual stimulus and then generating the required muscle kinematics based on these initial conditions in a module-specific way. These modular models also outperformed alternative models at explaining neural data, despite the absence of neural data during training, suggesting that the inputs, outputs, and architectural constraints imposed were sufficient for recapitulating processing in the grasping circuit. Finally, targeted lesioning of modules produced deficits similar to those observed in lesion studies of the grasping circuit, providing a potential model for how brain regions may coordinate during the visually guided grasping of objects.
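The modular architecture described here can be sketched as a block-structured recurrent connectivity mask: dense within each module, sparse between adjacent modules, and absent between non-adjacent ones (illustrative parameter choices; the trained networks' exact connectivity rules are not reproduced here):

```python
import numpy as np

def modular_mask(sizes, p_between=0.1, seed=0):
    """Connectivity mask for a chain of recurrent modules.

    sizes: units per module, e.g. [100, 100, 100] for three areas.
    Within-module connections are dense; only adjacent modules are
    connected, each direction with sparse probability p_between.
    """
    rng = np.random.default_rng(seed)
    total = sum(sizes)
    mask = np.zeros((total, total), dtype=bool)
    starts = np.cumsum([0] + list(sizes))
    for i, si in enumerate(sizes):
        a, b = starts[i], starts[i + 1]
        mask[a:b, a:b] = True                      # dense within module
        if i + 1 < len(sizes):                     # sparse links to next module
            c, d = starts[i + 1], starts[i + 2]
            mask[a:b, c:d] = rng.random((si, sizes[i + 1])) < p_between
            mask[c:d, a:b] = rng.random((sizes[i + 1], si)) < p_between
    return mask
```

During training, such a mask is multiplied elementwise with the recurrent weight matrix so that gradient updates cannot create connections between non-adjacent modules.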
We have recently shown that subjects can appropriately modulate their rapid motor responses (traditionally termed reflexes) to move their hand to a spatial target when the target is displayed ~2 s before a mechanical perturbation (Pruszynski et al. in J Neurophysiol 100:224-238, 2008). The goal of this study was to investigate how quickly visual information can be used to modulate rapid motor responses to an impending mechanical perturbation. Following a target preview delay (PD) ranging from 2 s down to 10 ms, a perturbation displaced the subject's hand either into or out of the previewed target. We also included a condition in which the target appeared after perturbation onset (target PD = +90 ms). In all cases, subjects were instructed to react as quickly as possible to the perturbation by reaching into the displayed target. Our results indicate that subjects began to incorporate visual information into their rapid motor responses with PDs as small as 70 ms. Interestingly, subjects reacted faster when the target was presented ~150 ms before the perturbation than when they had 2 s to prepare a response. Using receiver operating characteristic (ROC) analysis, we examined modulation of muscle activity as a function of preview delay in three predefined epochs. No modulation was found in the short-latency epoch (R1; 20-45 ms). In contrast, both the long-latency (45-105 ms) and voluntary (120-180 ms) epochs were modulated at essentially the same time: 140 ms from visual presentation of the target to the beginning of each respective epoch.
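The ROC criterion for detecting when muscle activity begins to discriminate the two target conditions can be sketched as follows (a rank-based AUC without tie handling; the threshold and data shapes are illustrative, not the study's exact parameters):

```python
import numpy as np

def roc_auc(x, y):
    """Area under the ROC curve for separating samples x from y,
    via the rank-sum (Mann-Whitney U) relationship. No tie handling."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1
    u = ranks[:len(x)].sum() - len(x) * (len(x) + 1) / 2
    return u / (len(x) * len(y))

def modulation_onset(emg_a, emg_b, times, threshold=0.75):
    """First time at which trial-wise EMG discriminates the two
    target conditions, per the ROC criterion.

    emg_a, emg_b: arrays of shape (n_trials, n_timepoints)."""
    for t, time in enumerate(times):
        if roc_auc(emg_a[:, t], emg_b[:, t]) >= threshold:
            return time
    return None
```

Running this within each predefined epoch (short-latency, long-latency, voluntary) would yield an onset estimate, or None when the epoch shows no reliable modulation.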
Our voluntary grasping actions lie on a continuum between immediate action and waiting for the right moment, depending on the context. Therefore, studying grasping requires an investigation into how preparation time affects this process. Two macaque monkeys (one male, one female) performed a grasping task with a short instruction followed by an immediate or delayed go cue (0-1300 ms) while we recorded in parallel from neurons in area F5, a grasp-preparation-relevant part of the ventral premotor cortex, and the anterior intraparietal area (AIP). Initial population dynamics followed a fixed trajectory in the neural state space unique to each grip type, reflecting unavoidable movement selection, then diverged depending on the delay, reaching unique states not achieved for immediately cued movements. Population activity in AIP was less dynamic, whereas F5 activity continued to evolve throughout the delay. Interestingly, neuronal populations from both areas allowed for a readout tracking subjective anticipation of the go cue that predicted single-trial reaction time; this prediction was better from F5 activity. Intriguingly, activity during movement initiation clustered into two trajectory groups, corresponding to movements that were either "as fast as possible" or withheld, demonstrating a widespread state shift in the fronto-parietal grasping network when movements must be withheld. Our results reveal how dissociation between immediate and delay-specific preparatory activity, as well as differentiation between cortical areas, is possible through population-level analysis. Sometimes when we move, we consciously plan our movements. At other times, we move instantly, seemingly with no planning at all. Yet, it is unclear how preparation for movements along this spectrum of planned and seemingly unplanned movement differs in the brain.
Two macaque monkeys made reach-to-grasp movements after varying amounts of preparation time while we recorded from the premotor and parietal cortex. We found that the initial response to a grasp instruction was specific to the required movement, but not to the preparation time, reflecting required movement selection. However, when more preparation time was given, neural activity achieved unique states that likely related to withholding movements and anticipation of movement, shedding light on the roles of the premotor and parietal cortex in grasp planning.
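A readout that predicts single-trial reaction time from population activity, as described above, can be sketched as a simple ridge-regularized linear regression (illustrative; the study's actual readout method may differ):

```python
import numpy as np

def fit_rt_readout(rates, rts, ridge=1.0):
    """Linear readout mapping trial-wise population firing rates
    to reaction times, with a ridge penalty for stability.

    rates: (n_trials, n_neurons); rts: (n_trials,)."""
    X = np.column_stack([rates, np.ones(len(rates))])   # add intercept column
    penalty = ridge * np.eye(X.shape[1])
    penalty[-1, -1] = 0.0                               # do not penalize intercept
    return np.linalg.solve(X.T @ X + penalty, X.T @ rts)

def predict_rt(rates, w):
    X = np.column_stack([rates, np.ones(len(rates))])
    return X @ w
```

Fitting on one set of trials and correlating predicted with observed reaction times on held-out trials would then quantify how well each area's population tracks anticipation.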
One of the primary ways we interact with the world is using our hands. In macaques, the circuit spanning the anterior intraparietal area, the hand area of the ventral premotor cortex, and the primary motor cortex is necessary for transforming visual information into grasping movements. We hypothesized that a recurrent neural network mimicking the multi-area structure of the anatomical circuit and using visual features to generate the required muscle dynamics to grasp objects would explain the neural and computational basis of the grasping circuit. Modular networks with object feature input and sparse inter-module connectivity outperformed other models at explaining neural data and the inter-area relationships present in the biological circuit, despite the absence of neural data during network training. Network dynamics were governed by simple rules, and targeted lesioning of modules produced deficits similar to those observed in lesion studies, providing a potential explanation for how grasping movements are generated.
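Targeted lesioning of a module, as described above, can be sketched as zeroing the recurrent weights into and out of that module's units (a minimal illustration; the networks' actual lesioning procedure may differ):

```python
import numpy as np

def lesion_module(weights, starts, module):
    """Silence one module of a modular RNN by zeroing all recurrent
    weights into and out of its units.

    weights: (n_units, n_units) recurrent weight matrix.
    starts: cumulative unit offsets, e.g. [0, 100, 200, 300]
            for three modules of 100 units each.
    """
    w = weights.copy()
    a, b = starts[module], starts[module + 1]
    w[a:b, :] = 0.0     # no output from the lesioned module
    w[:, a:b] = 0.0     # no input to the lesioned module
    return w
```

Comparing task performance of the intact and lesioned networks module by module would then mimic region-specific lesion experiments.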
Preparing and executing grasping movements demands the coordination of sensory information across multiple scales. The position of an object, required hand shape, and which of our hands to extend must all be coordinated in parallel. The network formed by the macaque anterior intraparietal area (AIP) and hand area (F5) of the ventral premotor cortex is essential in the generation of grasping movements. Yet, the role of this circuit in hand selection is unclear. We recorded from 1342 single- and multi-units in AIP and F5 of two macaque monkeys (Macaca mulatta) during a delayed grasping task in which monkeys were instructed by a visual cue to perform power or precision grips on a handle presented in five different orientations with either the left or right hand, as instructed by an auditory tone. In AIP, intended hand use (left vs. right) was only weakly represented during preparation, while hand use was robustly present in F5 during preparation. Interestingly, visual-centric handle orientation information dominated AIP, while F5 contained an additional body-centric frame during preparation and movement. Together, our results implicate F5 as a site of visuo-motor transformation and advocate a strong transition between hand-independent and hand-dependent representations in this parieto-frontal circuit.
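How strongly a variable such as intended hand use is represented in a population can be illustrated with a simple cross-validated decoder (a nearest-class-mean sketch; not the analysis used in the study):

```python
import numpy as np

def decode_accuracy(rates, labels):
    """Leave-one-out accuracy of a nearest-class-mean decoder,
    a simple proxy for how strongly a variable (e.g. left vs.
    right hand) is represented in population activity.

    rates: (n_trials, n_neurons); labels: (n_trials,)."""
    rates = np.asarray(rates, float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    correct = 0
    for i in range(len(rates)):
        keep = np.arange(len(rates)) != i          # hold out trial i
        means = [rates[keep & (labels == c)].mean(axis=0) for c in classes]
        dists = [np.linalg.norm(rates[i] - m) for m in means]
        correct += classes[int(np.argmin(dists))] == labels[i]
    return correct / len(rates)
```

Applied to AIP and F5 rates during the preparation epoch, accuracy near chance would indicate a weak hand-use representation, and accuracy near 1 a robust one.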