Visual cortical areas are richly but selectively connected by “patchy” projections. We characterized these connections physiologically with cross-correlograms (CCHs), calculated for neuron pairs or small groups located one each in visual areas 17 and 18 of the cat. The CCHs were then compared to the visuotopic and orientation match of the neurons' receptive fields (RFs).

For both spontaneous and visually driven activity, most non-flat correlograms were centered; i.e., the most likely temporal relationship between spikes in the two areas was a synchronous one. Although spikes were most likely to occur simultaneously, area 17 spikes could precede area 18 spikes or vice versa, giving the cross-correlogram peak a finite width (temporal dispersion). Cross-correlograms fell into one of three groups according to their full width at half peak height: 1–8 ms (modal width, 3 ms), 15–65 ms (modal width, 30 ms), or 100–1000 ms (modal width, 400 ms). These classificatory groups are nonoverlapping; the three types of coupling appeared singly and in combination.

Neurons whose receptive fields are nonoverlapping or cross-oriented may yet be coupled, but such coupling is more likely to be of the broadest type than of the medium-dispersed type. The sharpest type of coupling was found exclusively between neurons with at least partially overlapping RFs, and mostly between neurons whose stimulus orientation preferences matched to within 22.5 deg. The maximum spatial dispersion observed in the RFs of coupled neurons compares well with the maximum divergence seen anatomically in the A18/A17 projection system.

We suggest three different mechanisms, one for each of the three observed degrees of spatial and temporal coherence. All mechanisms use common input of cortical origin. For medium and broad coupling, this common input arises from cell assemblies split between both sides of the 17/18 projection system but acting synchronously.
Such distributed common-input cell assemblies are a means of overcoming sparse connectivity and achieving synaptic transmission in the pyramidal network.
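The classification described above rests on two computations: a cross-correlogram (a histogram of spike-time differences between the two areas) and its full width at half peak height. A minimal sketch of both, in Python with NumPy, is shown below; the bin width, correlation window, and synthetic spike trains are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, window=0.05, bin_width=0.001):
    """Histogram of spike-time differences (b - a) within +/- window seconds."""
    diffs = []
    for t in spikes_a:
        near = spikes_b[(spikes_b >= t - window) & (spikes_b <= t + window)]
        diffs.extend(near - t)
    edges = np.arange(-window, window + bin_width, bin_width)
    counts, _ = np.histogram(diffs, bins=edges)
    centers = edges[:-1] + bin_width / 2
    return centers, counts

def full_width_at_half_height(centers, counts):
    """Width (s) of the region where counts exceed half the peak count."""
    half = counts.max() / 2.0
    above = centers[counts >= half]
    return above.max() - above.min() if above.size else 0.0

# Two synthetic spike trains sharing common events with ~2 ms jitter each,
# mimicking the sharp, centered coupling described in the abstract (toy data).
rng = np.random.default_rng(0)
base = np.sort(rng.uniform(0, 10, 500))
a = base + rng.normal(0, 0.002, base.size)
b = base + rng.normal(0, 0.002, base.size)
centers, counts = cross_correlogram(a, b)
print(f"CCH peak at {centers[np.argmax(counts)] * 1000:.1f} ms, "
      f"FWHH = {full_width_at_half_height(centers, counts) * 1000:.1f} ms")
```

With this common-input toy data the histogram peaks near zero lag, and the peak width tracks the jitter of the shared events, which is the quantity the three classificatory groups are defined on.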
At a descriptive level, prehension movements can be partitioned into three components ensuring, respectively, the transport of the arm to the vicinity of the target, the orientation of the hand according to object tilt, and the grasp itself. Several authors have suggested that this analytic description may be an operational principle for the organization of the motor system. This hypothesis, called the "visuomotor channels hypothesis," is supported in particular by experiments showing a parallelism between the reach and grasp components of prehension movements. The purpose of the present study was to determine whether or not the generalization of the visuomotor channels hypothesis, from its initial form, restricted to the grasp and transport components, to its current form, encompassing the reach, orientation, and grasp components, is well founded. Six subjects were required to reach and grasp cylindrical objects presented at a given location, with different orientations. During the movements, object orientation was either kept constant (unperturbed trials) or modified at movement onset (perturbed trials). Results showed that both wrist path (the sequence of positions that the hand follows in space) and wrist trajectory (the time sequence of the successive positions of the hand) were strongly affected by object orientation and by the occurrence of perturbations. These observations strongly suggested that arm transport and hand orientation were neither planned nor controlled independently. The significant linear regressions observed, with respect to time, between arm displacement (the integral of the magnitude of the velocity vector) and forearm rotation also supported this view. Interestingly, hand orientation was not implemented at the distal level alone, demonstrating that all the redundant degrees of freedom available were used by the motor system to achieve the task.
The final configuration reached by the arm was very stable for a given final orientation of the object to grasp. In particular, when object tilt was suddenly modified at movement onset, the correction brought the upper limb into the same posture as that obtained when the object was initially presented along the final orientation reached after perturbation. Taken together, the results described in the present study suggest that arm transport and hand orientation do not constitute independent visuomotor channels. They further suggest that prehension movements are programmed, from an initial configuration, to smoothly reach a final posture that corresponds to a given "location and orientation" as a whole.
1. A fundamental question about motor control concerns the nature of the representations used by the nervous system to program movement. Theoretically, arm displacement can be encoded either in task (extrinsic) or in joint (intrinsic) space. 2. The present study investigated the organization of complex movements consisting of reaching and grasping a cylindrical object presented along different orientations in space. In some trials, object orientation was suddenly modified at movement onset. 3. At a static level, the final limb angles were highly predictable despite the wide range of possible postures allowed by articular redundancy. Moreover, when object orientation was unexpectedly modified at movement onset, the final angular configuration of the limb was identical to that obtained when the object was initially presented along the orientation reached after the perturbation. 4. At a dynamic level, a generalized synergy was observed, and tight correlations were noted between all joint angles involved in the movement, with the exception of elbow flexion. For this joint angle, which did not vary monotonically, strong partial correlations were nevertheless observed before and after movement reversal. 5. These results suggest that natural movements are mostly carried out in joint space by postural transitions.
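The two abstracts above both rely on the same kinematic quantities: arm displacement defined as the integral of the magnitude of the velocity vector, and its linear regression against forearm rotation. A minimal sketch of that computation, assuming hypothetical sampled wrist positions and a forearm pronation angle (the sampling rate, trajectory shapes, and variable names are illustrative assumptions, not the authors' data or code):

```python
import numpy as np

# Hypothetical kinematics: wrist position sampled at 100 Hz over a 1 s reach,
# and a forearm rotation angle evolving toward the object's tilt (toy signals).
dt = 0.01
t = np.arange(0, 1.0, dt)
positions = np.stack(
    [0.3 * t, 0.1 * np.sin(np.pi * t), np.zeros_like(t)], axis=1
)
forearm_angle = np.radians(40) * t  # monotonic rotation, in radians

# Arm displacement: integral of the magnitude of the velocity vector.
velocity = np.gradient(positions, dt, axis=0)
speed = np.linalg.norm(velocity, axis=1)
displacement = np.cumsum(speed) * dt

# Linear regression of forearm rotation against arm displacement.
slope, intercept = np.polyfit(displacement, forearm_angle, 1)
r = np.corrcoef(displacement, forearm_angle)[0, 1]
print(f"r = {r:.3f}")
```

A strong linear relation (r near 1) between displacement and rotation is the kind of evidence cited above against transport and orientation being controlled by independent channels.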
In this study, color and location were manipulated as stimulus attributes during a simple action in order to compare how dorsal (location) and ventral (color) features are integrated into action, and the timing of their processing. Eighteen subjects were presented with a green dot on a computer screen, which they were required to point at and touch. In 20% of the trials, the location or the color of the target was altered at the onset of the movement toward this stimulus, requiring the participant to modify the initially programmed response according to specific motor instructions. In the 'location-go' group, the target changed location and participants were instructed to reach the displaced stimulus by correcting their ongoing movement. In the 'location-stop' and 'color-stop' groups, subjects were instructed to interrupt their movement when the target changed location or color, respectively. Results showed that the latency of the first responses to the perturbation clearly depended on the stimulus attribute and not on the motor instruction tested: the response to the color change occurred about 80 ms later than in both conditions involving a location change. It is concluded that (1) color processing is slower than location processing, and (2) the first reactions to a location change occur after the same delay irrespective of the response required from the subject.
Single-cell activity was recorded from the monkey caudate nucleus while the animal executed motor and oculomotor sequences based on memorized information. In each trial, the monkey had to remember the order of illumination of three fixed spatial targets. After a delay, the animal had to press the targets in the same sequence. The "task-related" cells were activated at the onset of the targets and on execution of saccades or arm movements. In a majority of cells, activation depended not only on the retinal position of the stimuli or on the spatial parameters of gaze and arm movements, but was contingent on the particular sequence in which the targets were illuminated or the movements were performed.