Accurate control of human limbs involves both feedforward and feedback signals. For prosthetic arms, feedforward control is commonly accomplished by recording myoelectric signals from the residual limb to predict the user's intent, but commercial devices do not explicitly provide augmented feedback signals. Previous studies that provided artificial feedback in the presence of vision have reported inconsistent results: some showed benefits, while others did not. We hypothesized that the negligible benefits in past studies may have arisen because the artificial feedback had low precision compared to vision, leading users to rely heavily on vision during reaching tasks. We further anticipated more reliable benefits from artificial feedback conveying information that vision estimates with high uncertainty (e.g., joint speed). In this study, we tested an artificial sensory feedback system that provides joint speed information, examining how it affects performance and adaptation during a hybrid positional-and-myoelectric ballistic reaching task. We found that the feedback reduced overall reaching errors after perturbed control but did not significantly improve steady-state reaches. Furthermore, feedback about the joint speed of the myoelectric prosthesis improved the adaptation rate of biological limb movements, which may have resulted from high prosthesis control noise and from strategic overreaching with the positional control and underreaching with the myoelectric control. These results provide insight into the factors influencing the improvements conferred by artificial sensory feedback.
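The adaptation-rate finding above can be illustrated with a standard single-rate, trial-by-trial error-correction model from the motor-learning literature. This is a generic sketch, not the authors' analysis; the learning rates and perturbation size are hypothetical parameters chosen only to show how a faster learning rate (e.g., with feedback) drives reach errors down more quickly after a perturbation.

```python
import numpy as np

def simulate_adaptation(n_trials, perturbation, learning_rate, noise_sd, rng):
    """Single-rate state-space model: each trial's reach error is the
    perturbation minus the learner's internal compensation (plus motor
    noise); a fraction of that error updates the compensation."""
    compensation = 0.0
    errors = np.empty(n_trials)
    for t in range(n_trials):
        error = perturbation - compensation + rng.normal(0.0, noise_sd)
        errors[t] = error
        compensation += learning_rate * error   # error-driven update
    return errors

rng = np.random.default_rng(0)
# Hypothetical learning rates: faster adaptation (e.g., with feedback)
# vs. slower adaptation (e.g., vision alone); noise omitted for clarity.
fast = simulate_adaptation(50, 10.0, 0.4, 0.0, rng)
slow = simulate_adaptation(50, 10.0, 0.1, 0.0, rng)
```

With zero noise the error decays geometrically as (1 - learning_rate)^t, so the "fast" learner's residual error after 50 trials is far smaller than the "slow" learner's.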
Although partial-hand amputees largely retain the ability to use their wrist, it is difficult to preserve wrist motion while using a myoelectric partial-hand prosthesis without severely degrading control performance. Electromyogram (EMG) pattern recognition is a well-studied control method; however, EMG from wrist motion can obscure the myoelectric finger control signals. Thus, to accommodate wrist motion while maintaining high classification accuracy and low system latency, we developed a training protocol and a classifier that switches between long and short EMG analysis window lengths. Seventeen non-amputee and two partial-hand amputee subjects participated in a study to determine the effects of including EMG from different arm and hand locations, recorded during static and/or dynamic wrist motion, in the classifier training data. Using a three-way ANOVA, we evaluated several real-time classification techniques to determine which control scheme yielded the highest performance in virtual real-time tasks, and we found a significant interaction between analysis window length and the number of available grasps. Including static and dynamic wrist motion and intrinsic hand muscle EMG alongside extrinsic muscle EMG significantly reduced pattern recognition classification error by 35%. For non-amputee subjects, classification delay or majority voting techniques significantly improved real-time task completion rates (17%) and reduced selection times (23%), completion times (11%), and selection attempts (15%); the dual-window classifier significantly reduced the time (8%) and the average number of attempts (14%) required to complete grasp selections made in various wrist positions. Amputee subjects demonstrated improved task timeout rates and made fewer grasp selection attempts with classification delay or majority voting techniques. Thus, the proposed techniques show promise for improving control of partial-hand prostheses and more effectively restoring function to individuals using these devices.
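The majority-voting post-processing mentioned above can be sketched in a few lines. This is a minimal generic illustration, not the study's implementation: the stream of per-window class decisions and the vote-window size are assumed, and the idea is simply that voting over recent decisions suppresses transient misclassifications at the cost of a small added delay.

```python
from collections import Counter

def majority_vote(decisions, vote_window):
    """Smooth a stream of per-window classifier decisions by emitting,
    at each step, the most common class among the last `vote_window`
    decisions. Trades a short delay for fewer spurious class switches."""
    smoothed = []
    for i in range(len(decisions)):
        recent = decisions[max(0, i - vote_window + 1): i + 1]
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

# A transient misclassification ('point' amid 'power') is suppressed:
stream = ["power", "power", "point", "power", "power", "power"]
print(majority_vote(stream, 3))   # every smoothed decision is 'power'
```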
Sensory feedback is critical to fine motor control, learning, and adaptation. However, robotic prosthetic limbs currently lack the feedback segment of the communication loop between user and device. Sensory substitution feedback can close this gap, but in some studies the improvement persisted only when users could not see their prosthesis, suggesting the provided feedback was redundant with vision. Thus, given the choice, users rely on vision over artificial feedback. To effectively augment vision, sensory feedback must provide information that vision cannot provide or provides poorly. Although vision is known to be less precise at estimating speed than position, no work has compared the precision of visual speed estimates for biomimetic arm movements. In this study, we investigated the uncertainty of visual speed estimates for different virtual arm movements. We found that uncertainty was greatest for visual estimates of joint speeds, compared to absolute rotational or linear endpoint speeds. Furthermore, this uncertainty increased when the joint reference frame speed varied over time, potentially caused by an overestimation of joint speed. Finally, we demonstrate a joint-based sensory substitution feedback paradigm capable of significantly reducing joint speed uncertainty when paired with vision. Ultimately, this work may lead to improved prosthesis control and capacity for motor learning.
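The claim that pairing feedback with vision reduces joint speed uncertainty follows the standard minimum-variance (maximum-likelihood) cue-integration formula: for two independent Gaussian cues, the combined variance is the product of the individual variances divided by their sum, which is always smaller than either alone. The sketch below uses hypothetical standard deviations for illustration only; it is the textbook formula, not the study's measured values.

```python
def combined_sd(sd_vision, sd_feedback):
    """Minimum-variance integration of two independent Gaussian cues:
    combined variance = (v1 * v2) / (v1 + v2), below both inputs."""
    var = (sd_vision**2 * sd_feedback**2) / (sd_vision**2 + sd_feedback**2)
    return var**0.5

# Hypothetical values: vision estimates joint speed imprecisely
# (sd = 20 deg/s); substituted feedback is sharper (sd = 10 deg/s).
print(round(combined_sd(20.0, 10.0), 2))   # prints 8.94
```

The combined estimate (sd ≈ 8.94 deg/s) is more precise than either cue alone, which is why feedback that targets a quantity vision estimates poorly yields the largest gain.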
Partial-hand amputees often retain good residual wrist motion, which is essential for functional activities involving use of the hand. Thus, a crucial design criterion for a myoelectric, partial-hand prosthesis control scheme is that it allows the user to retain residual wrist motion. Pattern recognition (PR) of electromyographic (EMG) signals is a well-studied method of controlling myoelectric prostheses. However, wrist motion degrades a PR system's ability to correctly predict hand-grasp patterns. We studied the effects of (1) analysis window length and the number of hand grasps, (2) static and dynamic wrist motion, and (3) EMG muscle source on the ability of a PR-based control scheme to classify functional hand-grasp patterns. Our results show that training PR classifiers with both extrinsic and intrinsic muscle EMG yields a lower error rate than training with either group by itself (p<0.001), and that training in only variable wrist positions, with only dynamic wrist movements, or with both variable wrist positions and movements results in lower error rates than training in only the neutral wrist position (p<0.001). Finally, our results show that both an increase in window length and a decrease in the number of grasps available to the classifier significantly decrease classification error (p<0.001). These results remained consistent whether the classifier selected or maintained a hand grasp.
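The window-length effect above can be illustrated with a simple time-domain EMG feature, the mean absolute value (MAV), computed over sliding analysis windows: longer windows yield more stable feature estimates (hence lower classification error) at the cost of response latency. This is a generic sketch on synthetic stationary data, not the study's pipeline; the window lengths and step size are arbitrary illustrative choices.

```python
import numpy as np

def mav_features(signal, window_len, step):
    """Mean-absolute-value feature over sliding analysis windows,
    a common time-domain feature for EMG pattern recognition."""
    starts = range(0, len(signal) - window_len + 1, step)
    return np.array([np.mean(np.abs(signal[s:s + window_len])) for s in starts])

rng = np.random.default_rng(1)
emg = rng.normal(0.0, 1.0, 4000)          # synthetic stationary "EMG"

short = mav_features(emg, window_len=50, step=25)
long_ = mav_features(emg, window_len=400, step=25)

# Longer windows average over more samples, so the feature varies less
# from window to window; the classifier sees a cleaner input.
print(short.std() > long_.std())
```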