The present study compared the contributions of visual information about hand position and target position to the online control of goal-directed arm movements. Their respective contributions were assessed by examining how human subjects reacted when the position of either their seen hand or the visual target changed near the onset of the reaching movement. Subjects, seated head-fixed in a dark room, were instructed to look at and reach with a pointer towards visual targets located in the fronto-parallel plane at different distances to the right of the starting position. LEDs mounted on the tip of the pointer provided true or erroneous visual feedback about hand position. In some trials, either the target or the pointer LED signalling the actual hand position was shifted 4.5 cm to the left or right during the ocular saccade towards the target. Because of saccadic suppression, subjects did not perceive these displacements, which occurred near arm movement onset. The results showed that modifications of arm movement amplitude appeared, on average, 150 ms earlier and reached a greater extent (mean difference = 2.7 cm) for a change of target position than for a change of seen hand position. These findings highlight the weight of target position information in the online control of arm movements. Visual information about hand position may contribute less because proprioception also provides information about limb position.
Humans can remarkably adapt their motor behavior to novel environmental conditions, yet it remains unclear which factors enable us to transfer what we have learned with one limb to the other. Here we tested the hypothesis that interlimb transfer of sensorimotor adaptation is determined not only by environmental conditions but also by individual characteristics. We specifically examined the adaptation of unconstrained reaching movements to a novel velocity-dependent Coriolis force field. Right-handed subjects sat at the center of a rotating platform and performed forward reaching movements with the upper limb toward flashed visual targets in prerotation, per-rotation (i.e., adaptation), and postrotation tests. Only the dominant arm was used during adaptation, and interlimb transfer was assessed by comparing performance of the nondominant arm before and after dominant-arm adaptation. Vision and no-vision conditions did not significantly influence interlimb transfer of trajectory adaptation, which on average was significant but limited. We uncovered substantial heterogeneity of interlimb transfer across subjects and found that interlimb transfer could be qualitatively and quantitatively predicted for each healthy young individual. A classifier showed that, in our study, interlimb transfer could be predicted from a subject's task performance, most notably motor variability during learning, and his or her laterality quotient. Positive correlations suggested that variability of motor performance and lateralization of arm movement control facilitate interlimb transfer. We further show that these individual characteristics can predict the presence and magnitude of interlimb transfer in left-handers. Overall, this study suggests that individual characteristics shape the way the nervous system generalizes motor learning.
Not just detecting but also predicting impairment of a car driver's operational state is a challenge. This study aims to determine whether the standard sources of information used to detect drowsiness can also be used to predict when a given drowsiness level will be reached. Moreover, we explore whether adding data such as driving time and participant information improves the accuracy of drowsiness detection and prediction. Twenty-one participants drove a car simulator for 110 min under conditions optimized to induce drowsiness. We measured physiological and behavioral indicators such as heart rate and heart-rate variability, respiration rate, and head and eyelid movements (blink duration, blink frequency, and PERCLOS), and recorded driving behavior such as time-to-lane-crossing, speed, steering wheel angle, and position in the lane. Different combinations of this information were tested against the real state of the driver (the ground truth), as defined from video recordings via the Trained Observer Rating. Two models using artificial neural networks were developed: one to detect the degree of drowsiness every minute, and the other to predict, every minute, the time required to reach a particular drowsiness level (moderately drowsy). The best performance in both detection and prediction was obtained with behavioral indicators and additional information. The model can detect the drowsiness level with a mean square error of 0.22 and can predict when a given drowsiness level will be reached with a mean square error of 4.18 min. This study shows that, in a controlled and very monotonous driving-simulator environment conducive to drowsiness, the dynamics of driver impairment can be predicted.
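The regression setup described in this abstract, per-minute feature vectors mapped to a drowsiness score by a neural network and evaluated with mean square error, can be sketched in a few lines. The code below is an illustrative toy example only, not the authors' implementation: the feature names, network size, and synthetic data are all assumptions, and only the general scheme (train a small network on per-minute features, report MSE against the ground-truth rating) follows the abstract.

```python
import numpy as np

# Toy dataset standing in for per-minute driving windows: 200 windows,
# 6 hypothetical features (e.g., blink duration, PERCLOS, time-to-lane-crossing,
# steering wheel angle variability, driving time, respiration rate).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
true_w = rng.normal(size=6)
# Synthetic target standing in for the observer-rated drowsiness level.
y_level = X @ true_w + rng.normal(scale=0.1, size=200)

def train_mlp(X, y, hidden=8, lr=0.05, epochs=1000):
    """Train a minimal one-hidden-layer tanh network by full-batch gradient descent."""
    rng = np.random.default_rng(1)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.1, size=hidden)
    for _ in range(epochs):
        h = np.tanh(X @ W1)            # hidden activations
        pred = h @ W2                  # linear output layer
        err = pred - y
        # Backpropagated gradients for both weight matrices.
        gW2 = h.T @ err / len(y)
        gW1 = X.T @ (np.outer(err, W2) * (1 - h**2)) / len(y)
        W2 -= lr * gW2
        W1 -= lr * gW1
    return W1, W2

W1, W2 = train_mlp(X, y_level)
pred = np.tanh(X @ W1) @ W2
mse = float(np.mean((pred - y_level) ** 2))
print(f"training MSE: {mse:.3f}")
```

The same skeleton applies to the second model in the abstract by swapping the target from a drowsiness score to the remaining time (in minutes) until the "moderately drowsy" level is reached.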