Cocontraction (the simultaneous activation of antagonist muscles around a joint) provides the nervous system with a way to adapt the mechanical properties of the limb to changing task requirements, both under static conditions and during movement. However, relatively little is known about the conditions under which the motor system modulates limb impedance through cocontraction. The goal of this study was to test for a possible relationship between cocontraction and movement accuracy in multi-joint limb movements. The electromyographic activity of seven single- and double-joint shoulder and elbow muscles was recorded using surface electrodes while subjects performed a pointing task in a horizontal plane to targets that varied randomly in size. Movement speed was controlled by providing subjects with feedback on a trial-to-trial basis. Measures of cocontraction were estimated both during movement and during a 200-ms window immediately following movement end. We observed an inverse relationship between target size and cocontraction: as target size was reduced, cocontraction activity increased. In addition, trajectory variability decreased and endpoint accuracy improved. This suggests that, although energetically expensive, cocontraction may be a strategy used by the motor system to facilitate multi-joint arm movement accuracy. We also observed a general trend for cocontraction levels to decrease over time, supporting the idea that cocontraction and associated limb stiffness are reduced over the course of practice.
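The abstract above estimates cocontraction from surface EMG both during movement and in a 200-ms post-movement window. One common index (a hedged sketch, not necessarily the measure used in the study) is the sample-wise overlap, i.e., the minimum, of two normalized, rectified EMG envelopes from an antagonist pair; the synthetic envelopes and the 1-kHz sampling rate below are illustrative assumptions:

```python
import numpy as np

def cocontraction_index(agonist, antagonist):
    """Sample-wise cocontraction: the overlap (minimum) of two normalized,
    rectified EMG envelopes, averaged over the analysis window."""
    return float(np.mean(np.minimum(agonist, antagonist)))

# Synthetic envelopes, each normalized to that muscle's maximum activation
fs = 1000                                      # assumed sampling rate (Hz)
t = np.arange(0, 0.6, 1 / fs)                  # 600 ms of data
agonist = 0.6 + 0.2 * np.sin(2 * np.pi * 2 * t)
antagonist = 0.5 + 0.1 * np.cos(2 * np.pi * 2 * t)

move_end = int(0.4 * fs)                            # movement ends at 400 ms (assumed)
post = slice(move_end, move_end + int(0.2 * fs))    # 200-ms post-movement window

cc_move = cocontraction_index(agonist[:move_end], antagonist[:move_end])
cc_post = cocontraction_index(agonist[post], antagonist[post])
```

With envelopes normalized to each muscle's own maximum, the index lies between 0 (no overlap) and 1 (both muscles fully active throughout the window).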
Learning complex motor behaviors like riding a bicycle or swinging a golf club is based on acquiring neural representations of the mechanical requirements of movement (e.g., coordinating muscle forces to control the club). Here we provide evidence that mechanisms matching observation and action facilitate motor learning. Subjects who observed a video depicting another person learning to reach in a novel mechanical environment (imposed by a robot arm) performed better when later tested in the same environment than subjects who observed similar movements but no learning; moreover, subjects who observed learning of a different environment performed worse. We show that this effect is not based on conscious strategies but instead depends on the implicit engagement of neural systems for movement planning and control.
Three experiments are reported on the influence of different timing relations on the McGurk effect. In the first experiment, it is shown that strict temporal synchrony between auditory and visual speech stimuli is not required for the McGurk effect. Subjects were strongly influenced by the visual stimuli when the auditory stimuli lagged the visual stimuli by as much as 180 msec. In addition, a stronger McGurk effect was found when the visual and auditory vowels matched. In the second experiment, we paired auditory and visual speech stimuli produced under different speaking conditions (fast, normal, clear). The results showed that the manipulations in both the visual and auditory speaking conditions independently influenced perception. In addition, there was a small but reliable tendency for the better matched stimuli to elicit more McGurk responses than unmatched conditions. In the third experiment, we combined auditory and visual stimuli produced under different speaking conditions (fast, clear) and delayed the acoustics with respect to the visual stimuli. The subjects showed the same pattern of results as in the second experiment. Finally, the delay did not cause different patterns of results for the different audiovisual speaking style combinations. The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but do not require temporal coincidence of that information.

When the face moves during speech production, it provides information about the place of articulation as well as the class of phoneme being produced. Evidence from studies of lipreading as well as studies of speech in noise (e.g., Sumby & Pollack, 1954) suggests that perceivers can gain significant amounts of information about the speech target through the visual channel. How this information is combined with speech acoustics to form a single percept, however, is not clear.
One useful approach to studying audiovisual integration in speech is to dub various auditory stimuli onto different visual speech stimuli. When a discrepancy exists between the information from the two modalities, subjects fuse the visual and auditory information to form a new percept. For example, when the face articulates /gi/ and the auditory stimulus is /bi/, many subjects report hearing /di/.
Motor learning is dependent upon plasticity in motor areas of the brain, but does it occur in isolation, or does it also result in changes to sensory systems? We examined changes to somatosensory function that occur in conjunction with motor learning. We found that even after periods of training as brief as 10 min, sensed limb position was altered and the perceptual change persisted for 24 h. The perceptual change was reflected in subsequent movements; limb movements following learning deviated from the prelearning trajectory by an amount that was not different in magnitude and in the same direction as the perceptual shift. Crucially, the perceptual change was dependent upon motor learning. When the limb was displaced passively such that subjects experienced similar kinematics but without learning, no sensory change was observed. The findings indicate that motor learning affects not only motor areas of the brain but changes sensory function as well.
During multijoint limb movements such as reaching, rotational forces arise at one joint due to the motions of limb segments about other joints. We report the results of three experiments in which we assessed the extent to which control signals to muscles are adjusted to counteract these "interaction torques." Human subjects performed single- and multijoint pointing movements involving shoulder and elbow motion, and movement parameters related to the magnitude and direction of interaction torques were manipulated systematically. We examined electromyographic (EMG) activity of shoulder and elbow muscles and, specifically, the relationship between EMG activity and joint interaction torque. A first set of experiments examined single-joint movements. During both single-joint elbow (experiment 1) and shoulder (experiment 2) movements, phasic EMG activity was observed in muscles spanning the stationary joint (shoulder muscles in experiment 1 and elbow muscles in experiment 2). This muscle activity preceded movement and varied in amplitude with the magnitude of upcoming interaction torque (the load resulting from motion of the nonstationary limb segment). In a third experiment, subjects performed multijoint movements involving simultaneous motion at the shoulder and elbow. Movement amplitude and velocity at one joint were held constant, while the direction of movement about the other joint was varied. When the direction of elbow motion was varied (flexion vs. extension) and shoulder kinematics were held constant, EMG activity in shoulder muscles varied depending on the direction of elbow motion (and hence the sign of the interaction torque arising at the shoulder). Similarly, EMG activity in elbow muscles varied depending on the direction of shoulder motion for movements in which elbow kinematics were held constant. 
The results from all three experiments support the idea that central control signals to muscles are adjusted, in a predictive manner, to compensate for interaction torques: loads arising at one joint that depend on motion about other joints.
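The interaction torques discussed above follow from the standard dynamics of a planar two-link arm. The sketch below (with illustrative inertial parameters, not the study's) shows the key property exploited in experiment 3: with shoulder kinematics held constant, reversing the direction of elbow acceleration flips the sign of the interaction torque arising at the shoulder:

```python
import numpy as np

def interaction_torques(q2, dq1, dq2, ddq1, ddq2,
                        m2=1.5, l1=0.30, r2=0.20, I2=0.02):
    """Dynamic coupling (interaction) torques for a planar two-link arm.
    q2: elbow angle (rad); dq*, ddq*: joint velocities and accelerations.
    m2: forearm mass, l1: upper-arm length, r2: forearm center-of-mass
    distance, I2: forearm moment of inertia (all values illustrative)."""
    h = m2 * l1 * r2 * np.sin(q2)                     # Coriolis/centripetal coefficient
    M12 = I2 + m2 * (r2**2 + l1 * r2 * np.cos(q2))    # coupling inertia
    # Torque felt at the shoulder due to elbow motion:
    tau_shoulder_int = M12 * ddq2 - h * dq2**2 - 2 * h * dq1 * dq2
    # Torque felt at the elbow due to shoulder motion:
    tau_elbow_int = M12 * ddq1 + h * dq1**2
    return tau_shoulder_int, tau_elbow_int

# Same shoulder state, opposite elbow accelerations (flexion vs. extension)
ts_flex, _ = interaction_torques(q2=0.5, dq1=0.0, dq2=0.0, ddq1=0.0, ddq2=5.0)
ts_ext, _ = interaction_torques(q2=0.5, dq1=0.0, dq2=0.0, ddq1=0.0, ddq2=-5.0)
```

Because the coupling is linear in the accelerations, the shoulder interaction torque reverses sign exactly with the direction of elbow acceleration, which is why shoulder EMG must change with elbow movement direction even when shoulder kinematics are identical.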
It has been proposed that the control signals underlying voluntary human arm movement have a "complex" nonmonotonic time-varying form, and a number of empirical findings have been offered in support of this idea. In this paper, we address three such findings using a model of two-joint arm motion based on the lambda version of the equilibrium-point hypothesis. The model includes six one- and two-joint muscles, reflexes, modeled control signals, muscle properties, and limb dynamics. First, we address the claim that "complex" equilibrium trajectories are required to account for nonmonotonic joint impedance patterns observed during multijoint movement. Using constant-rate shifts in the neurally specified equilibrium of the limb and constant cocontraction commands, we obtain patterns of predicted joint stiffness during simulated multijoint movements that match the nonmonotonic patterns reported empirically. We then use the algorithm proposed by Gomi and Kawato to compute a hypothetical equilibrium trajectory from simulated stiffness, viscosity, and limb kinematics. Like that reported by Gomi and Kawato, the resulting trajectory was nonmonotonic, first leading then lagging the position of the limb. Second, we address the claim that high levels of stiffness are required to generate rapid single-joint movements when simple equilibrium shifts are used. We compare empirical measurements of stiffness during rapid single-joint movements with the predicted stiffness of movements generated using constant-rate equilibrium shifts and constant cocontraction commands. Single-joint movements are simulated at a number of speeds, and the procedure used by Bennett to estimate stiffness is followed. We show that when the magnitude of the cocontraction command is scaled in proportion to movement speed, simulated joint stiffness varies with movement speed in a manner comparable with that reported by Bennett. 
Third, we address the related claim that nonmonotonic equilibrium shifts are required to generate rapid single-joint movements. Using constant-rate equilibrium shifts and constant cocontraction commands, rapid single-joint movements are simulated in the presence of external torques. We use the procedure reported by Latash and Gottlieb to compute hypothetical equilibrium trajectories from simulated torque and angle measurements during movement. As in Latash and Gottlieb, a nonmonotonic function is obtained even though the control signals used in the simulations are constant-rate changes in the equilibrium position of the limb. Differences between the "simple" equilibrium trajectory proposed in the present paper and those that are derived from the procedures used by Gomi and Kawato and Latash and Gottlieb arise from their use of simplified models of force generation.
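The "simple" control signal argued for above can be illustrated with a single-joint toy model: a constant-rate (ramp) shift of the equilibrium position drives a spring-like joint whose stiffness is held constant, standing in for a constant cocontraction command. All parameter values here are assumptions for illustration and are not taken from the paper:

```python
import numpy as np

dt = 0.001                      # integration step (s)
t = np.arange(0.0, 1.0, dt)

# Constant-rate equilibrium shift: a monotonic ramp from lam0 to lam1
lam0, lam1 = 0.0, 1.0           # initial/final equilibrium angles (rad)
shift_dur = 0.3                 # ramp duration (s)
lam = np.clip(lam0 + (lam1 - lam0) * t / shift_dur, lam0, lam1)

# Spring-like joint: constant stiffness k stands in for a constant
# cocontraction command; b is damping, I inertia (illustrative values)
k, b, I = 30.0, 3.0, 0.1
x = np.zeros_like(t)
v = np.zeros_like(t)
for i in range(1, t.size):
    a = (k * (lam[i - 1] - x[i - 1]) - b * v[i - 1]) / I
    v[i] = v[i - 1] + a * dt
    x[i] = x[i - 1] + v[i] * dt
```

The commanded equilibrium trajectory is strictly monotonic, yet the joint still produces a smooth movement that lags and then settles on the target; nonmonotonic equilibrium trajectories recovered from such data would therefore reflect the estimation procedure, not the underlying command.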
The appearance of a novel visual stimulus generates a rapid stimulus-locked response (SLR) in the motor periphery within 100 ms of stimulus onset. Here, we recorded SLRs from an upper limb muscle while humans reached toward (pro-reach) or away (anti-reach) from a visual stimulus. The SLR on anti-reaches encoded the location of the visual stimulus rather than the movement goal. Further, SLR magnitude was attenuated when subjects reached away from rather than toward the visual stimulus. Remarkably, SLR magnitudes also correlated with reaction times on both pro-reaches and anti-reaches, but did so in opposite ways: larger SLRs preceded shorter latency pro-reaches but longer latency anti-reaches. Although converging evidence suggests that the SLR is relayed via a tectoreticulospinal pathway, our results show that task-related signals modulate visual signals feeding into this pathway. The SLR therefore provides a trial-by-trial window into how visual information is integrated with cognitive control in humans.
We recorded muscle activity from an upper limb muscle while human subjects reached towards peripheral targets. We tested the hypothesis that the transient visual response sweeps not only through the central nervous system, but also through the peripheral nervous system. Like the transient visual response in the central nervous system, stimulus-locked muscle responses (< 100 ms) were sensitive to stimulus contrast, and were temporally and spatially dissociable from voluntary orienting activity. Also, the arrival of visual responses reduced the variability of muscle activity by resetting the phase of ongoing low-frequency oscillations. This latter finding critically extends the emerging evidence that the feedforward visual sweep reduces neural variability via phase resetting. We conclude that, when sensory information is relevant to a particular effector, detailed information about the sensorimotor transformation, even from the earliest stages, is found in the peripheral nervous system.
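The phase-resetting account above can be illustrated with a toy simulation: each trial carries a low-frequency oscillation with a random phase before stimulus onset; resetting every trial to a common phase at onset collapses the across-trial variance. The oscillation frequency, trial count, and noiseless signals are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f = 1000, 5                     # sampling rate and oscillation frequency (Hz), assumed
t = np.arange(-0.5, 0.5, 1 / fs)    # time relative to stimulus onset (s)
n_trials = 200

trials = np.empty((n_trials, t.size))
for i in range(n_trials):
    phase = rng.uniform(0, 2 * np.pi)        # ongoing oscillation, random phase
    pre = np.sin(2 * np.pi * f * t + phase)
    post = np.sin(2 * np.pi * f * t)         # phase reset to a common value at t = 0
    trials[i] = np.where(t < 0, pre, post)

# Across-trial variance, averaged over time, before vs. after onset
var_pre = float(trials[:, t < 0].var(axis=0).mean())
var_post = float(trials[:, t >= 0].var(axis=0).mean())
```

Before onset the random phases make the across-trial variance large (about half the squared amplitude); after the reset all trials are aligned and the variance collapses, which is the signature the abstract describes in the muscle recordings.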