As development of multi-fingered dexterous prosthetic hands continues, there is a growing need for more flexible and intuitive control schemes. Using generalized electrode placement and well-established pattern recognition methods, we have developed a basis for asynchronous decoding of finger positions. With the present method, correlations as high as 0.91 between the decoded and actual conformation of the metacarpophalangeal joints of individual fingers have been achieved, with mean overall decoding errors of approximately 11%. It is hoped that these results will serve as a foundation to encourage further investigation into more intuitive methods of myoelectric control of powered upper-limb prostheses.
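The decoding metrics reported above (correlation and percentage error between decoded and actual joint angles) can be computed as in this minimal sketch; the angle values and the 90° range-of-motion normalization are illustrative assumptions, not data from the study:

```python
import numpy as np

# Hypothetical decoded vs. actual MCP joint angles (degrees) for one finger;
# these arrays are illustrative, not data from the study.
actual = np.array([10.0, 25.0, 40.0, 55.0, 38.0, 20.0])
decoded = np.array([12.0, 22.0, 43.0, 51.0, 40.0, 18.0])

# Pearson correlation between decoded and actual joint conformation
r = np.corrcoef(actual, decoded)[0, 1]

# Mean decoding error as a percentage of the joint's range of motion
rom = 90.0  # assumed full MCP flexion range, degrees
mean_error_pct = np.mean(np.abs(decoded - actual)) / rom * 100.0

print(r, mean_error_pct)
```

With these illustrative values the correlation is about 0.98 and the mean error about 3% of the range of motion; the study's reported figures (0.91, ~11%) come from its own data.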
This study presents the development of a myoelectric decoding algorithm capable of continuous online decoding of finger movements, with the eventual goal of application in prostheses for transradial amputees. The effectiveness of the algorithm was evaluated by controlling a multi-fingered hand in a virtual environment. Two intact-limbed adult subjects used myoelectric signals collected from 8 bipolar electrodes to control four fingers in real time, touching and maintaining contact with targets appearing at various points in the flexion space of the hand. In these tasks, subjects achieved accuracies of 94% when target regions extended ±11.5° about a target angle and 81% when the target region extended only ±5.75° about the target angle. The real-time virtual system provides a practical and economical way to develop and train both algorithms and amputee users of dexterous prostheses.
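The target-acquisition accuracy described above amounts to checking whether each decoded joint angle lands within a tolerance band around its target. A minimal sketch, with illustrative angles rather than study data:

```python
import numpy as np

# Hypothetical decoded MCP angles and target angles (degrees) for four fingers;
# values are illustrative only.
decoded = np.array([42.0, 33.0, 61.0, 18.0])
targets = np.array([45.0, 25.0, 60.0, 30.0])

# Fraction of fingers inside the wide (±11.5°) and narrow (±5.75°) target regions
wide = np.mean(np.abs(decoded - targets) <= 11.5)
narrow = np.mean(np.abs(decoded - targets) <= 5.75)
print(wide, narrow)
```

The narrower band necessarily admits fewer decoded angles, which is why the study's accuracy drops from 94% to 81% as the tolerance is halved.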
Introduction
Electromyogram (EMG)-based pattern recognition control of prosthetic limbs is the current state of the art. However, these systems commonly fail when the user attempts to use the limb in a position different from the one in which it was trained, significantly reducing functionality. Robust models for decoding EMG signals, accounting for the specific changes that occur with positional variation, are needed to reduce this negative effect.

Methods
Ten able-bodied participants and two participants with transradial amputation were included in the study. Participants were fitted with surface EMG electrodes as well as a network of inertial measurement units (IMUs) to monitor limb position during tasks. Positional covariates, including elbow angle, hand height, and forearm angle, were analyzed for their impact on EMG signal features in order to generate position-specific linear discriminant analysis (LDA) classifiers. Offline analysis of classification error for each control scheme was then completed.

Results
Elbow angle demonstrated the strongest impact on the EMG signal, and EMG signal amplitude increased consistently with hand height. Incorporating these covariates into classifier algorithms improved performance compared with classifiers trained in the conventional fashion (single-position EMG). However, able-bodied participants demonstrated the lowest classification error when data from random training positions were incorporated (10.3% vs. 17.2% for single-position training, P < 0.001). These differences were even more dramatic in participants with amputation (with five training repetitions: 7.14% vs. 32.08%, P < 0.001). For individuals with amputation, the performance difference between single-position and random-position training was significantly larger when the user was wearing the prosthesis than when not.
Conclusions
Incorporating position-specific covariates into myoelectric classification algorithms can dramatically improve robustness and classification accuracy when the prosthesis is used throughout the user's entire workspace. In single-position training paradigms, classification error rates for the two participants with amputation were 39.22% and 32.18%, respectively, resulting in unusable classifiers. Conversely, classification errors were near 10% for able-bodied participants and near 7% for participants with amputation when at least five training repetitions were used to train either a random-position or position-specific classifier. As position-tracking hardware becomes smaller and can be integrated into socket designs, incorporating this information into classifier algorithms can dramatically reduce the limb-position effect. In the meantime, current users can reduce the limb-position effect by training in multiple random positions.
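The advantage of multi-position over single-position training can be illustrated with a toy simulation of the limb-position effect. The sketch below uses a nearest-centroid classifier as a simple stand-in for the study's LDA classifier; the two-class feature model and the position-dependent shifts are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy limb-position effect: two "grasp" classes whose 2-D EMG feature means
# shift with arm position. All values are illustrative assumptions.
def simulate(position_shift, n=200):
    X0 = rng.normal(0.0, 1.0, (n, 2)) + position_shift  # class 0 features
    X1 = rng.normal(3.0, 1.0, (n, 2)) + position_shift  # class 1 features
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    # Nearest-centroid training: one mean feature vector per class
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def error(centroids, X, y):
    # Classify each sample by its nearest class centroid
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float(np.mean(np.argmin(d2, axis=1) != y))

test_X, test_y = simulate(2.0)            # evaluate at a shifted arm position

single = fit_centroids(*simulate(0.0))    # trained in one position only

pooled_data = [simulate(s) for s in (0.0, 1.0, 2.0)]  # several positions
Xp = np.vstack([X for X, _ in pooled_data])
yp = np.concatenate([y for _, y in pooled_data])
pooled = fit_centroids(Xp, yp)

single_err = error(single, test_X, test_y)
pooled_err = error(pooled, test_X, test_y)
print(single_err, pooled_err)
```

In this toy model the single-position classifier's decision boundary sits in the wrong place once the features shift, while pooling data from several positions keeps the error low, mirroring the direction (though not the magnitudes) of the study's results.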
Brain-machine interfaces (BMIs) are a rapidly progressing technology with the potential to restore function to victims of severe paralysis via neural control of robotic systems. Great strides have been made in directly mapping a user's cortical activity to control of the individual degrees of freedom of robotic end-effectors. While BMIs have yet to achieve the level of reliability desired for widespread clinical use, environmental sensors (e.g., RGB-D cameras for object detection) and prior knowledge of common movement trajectories hold great potential for improving system performance. Here we present a novel sensor fusion paradigm for BMIs that capitalizes on information extracted from the environment to greatly improve control performance. This was accomplished by using dynamic movement primitives to model the 3D endpoint trajectories involved in manipulating various objects. We then used a switching unscented Kalman filter to continuously arbitrate between the 3D endpoint kinematics predicted by the dynamic movement primitives and control derived from neural signals. We experimentally validated the system by decoding 3D endpoint trajectories executed by a non-human primate manipulating four different objects at various locations. Performance with our system improved dramatically over using neural signals alone: the median distance between actual and decoded trajectories decreased from 31.1 cm to 9.9 cm, and the mean correlation increased from 0.80 to 0.98. These results indicate that our sensor fusion framework can dramatically increase the fidelity of neural prosthetic trajectory decoding.
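The core idea of arbitrating between a trajectory prior and a neural decode can be caricatured as a weighted fusion of two noisy position estimates. The sketch below uses simple inverse-variance weighting as a stand-in for the paper's switching unscented Kalman filter; the positions and variances are illustrative assumptions:

```python
import numpy as np

# Stand-in for the paper's switching UKF: fuse a neural-decode estimate and a
# trajectory-prior (DMP-like) estimate of 3D endpoint position by
# inverse-variance weighting. All values here are illustrative.
def fuse(neural_xyz, prior_xyz, neural_var, prior_var):
    """Weight each 3D estimate by the other's variance (more noise, less weight)."""
    w_neural = prior_var / (neural_var + prior_var)
    w_prior = neural_var / (neural_var + prior_var)
    return w_neural * neural_xyz + w_prior * prior_xyz

neural = np.array([10.0, 4.0, 2.0])  # noisy neural decode of endpoint (cm)
prior = np.array([12.0, 5.0, 1.0])   # DMP-predicted endpoint (cm)

# Neural decode assumed much noisier than the trajectory prior here,
# so the fused estimate leans toward the prior.
fused = fuse(neural, prior, neural_var=9.0, prior_var=1.0)
print(fused)
```

A full switching UKF would additionally propagate state covariances through a nonlinear motion model and infer which object-specific primitive is active; this sketch shows only the arbitration step in its simplest form.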