Background: Myoelectric pattern recognition systems can decode movement intention to drive upper-limb prostheses. Despite recent advances in academic research, commercial adoption of such systems remains low. This limitation is mainly due to a lack of classification robustness and the simultaneous requirement for a large number of electromyogram (EMG) electrodes. We propose to address these two issues with a multi-modal approach that combines surface electromyography (sEMG) with inertial measurements (IMs) and an appropriate training data collection paradigm. We demonstrate that this can significantly improve classification performance compared to conventional techniques based exclusively on sEMG signals.

Methods: We collected and analyzed a large dataset comprising recordings of 20 able-bodied and two amputee participants executing 40 movements. Additionally, we conducted a novel real-time prosthetic hand control experiment with 11 able-bodied subjects and one amputee using a state-of-the-art commercial prosthetic hand. A systematic performance comparison was carried out to investigate the potential benefit of incorporating IMs in prosthetic hand control.

Results: The inclusion of IM data improved performance significantly, increasing classification accuracy (CA) in the offline analysis and improving completion rates (CRs) in the real-time experiment. Our findings were consistent across able-bodied and amputee subjects. Integrating the sEMG electrodes and IM sensors within a single sensor package enabled us to achieve high-level performance using, on average, 4-6 sensors.

Conclusions: The results from our experiments suggest that IMs can form an excellent complementary signal source for upper-limb myoelectric prostheses. We believe that multi-modal control solutions have the potential to improve the usability of upper-extremity prostheses in real-life applications.

Electronic supplementary material: The online version of this article (doi:10.1186/s12984-017-0284-4) contains supplementary material, which is available to authorized users.
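The multi-modal approach described in the abstract above can be sketched at the feature level: per-window sEMG and inertial features are fused (here simply concatenated) before classification. The following is a minimal illustration only, using synthetic Gaussian blobs as stand-ins for real windowed features and a nearest-centroid classifier in place of whatever classifier the study actually used; the dimensions, class separation, and feature shapes are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 movement classes, 8 sensor sites.
# sEMG yields 8 features per window, the inertial measurements 24.
n_train, n_test, n_classes = 160, 40, 4
y_train = rng.integers(0, n_classes, n_train)
y_test = rng.integers(0, n_classes, n_test)

def make_feats(y, dim, sep=0.8):
    # Class-dependent Gaussian blobs standing in for real features.
    return rng.normal(size=(len(y), dim)) + sep * y[:, None]

emg_tr, emg_te = make_feats(y_train, 8), make_feats(y_test, 8)
imu_tr, imu_te = make_feats(y_train, 24), make_feats(y_test, 24)

def fuse(a, b):
    # Multi-modal fusion: concatenate sEMG and IM features per window.
    return np.concatenate([a, b], axis=1)

def nearest_centroid(train_x, train_y, test_x):
    # Classify each test window by its closest class centroid.
    centroids = np.stack([train_x[train_y == c].mean(axis=0)
                          for c in range(n_classes)])
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None], axis=2)
    return dists.argmin(axis=1)

acc_emg = (nearest_centroid(emg_tr, y_train, emg_te) == y_test).mean()
acc_fused = (nearest_centroid(fuse(emg_tr, imu_tr), y_train,
                              fuse(emg_te, imu_te)) == y_test).mean()
print(f"sEMG-only accuracy: {acc_emg:.2f}, fused accuracy: {acc_fused:.2f}")
```

On real data, the reported benefit comes from the inertial features carrying limb-orientation information that the sEMG channels alone do not; the synthetic blobs here merely show where the extra features enter the pipeline.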
Machine learning-based myoelectric control is regarded as an intuitive paradigm because of the mapping it creates between muscle co-activation patterns and prosthesis movements, which aims to simulate the physiological pathways of the human arm. Nevertheless, there is evidence that closed-loop interaction with a classification-based interface results in user adaptation, which leads to performance improvement with experience. Recently, there has been a shift of focus toward continuous prosthesis control, yet little is known about whether and how user adaptation affects myoelectric control performance in dexterous, intuitive tasks. We investigate the effect of short-term adaptation with independent finger position control by conducting real-time experiments with 10 able-bodied and two transradial amputee subjects. We demonstrate that, despite the use of an intuitive decoder, experience leads to significant improvements in performance. We argue that this is due to the lack of a fully natural control scheme, mainly caused by differences between the anatomy of human and artificial hands, inaccuracies in movement intent decoding, and the lack of proprioception. Finally, we extend previous work in classification-based and continuous wrist control by verifying that offline analyses cannot reliably predict real-time performance, thereby reiterating the importance of validating myoelectric control algorithms with real-time experiments.
In the field of upper-limb myoelectric prosthesis control, statistical and machine learning methods have long been proposed as a means of enabling intuitive grip selection and actuation. Recently, this paradigm has found its way to commercial adoption. Machine learning-based prosthesis control typically relies on a large number of electrodes. Here, we propose an end-to-end strategy for multi-grip, classification-based prosthesis control using only two sensors, comprising electromyography (EMG) electrodes and inertial measurement units (IMUs). We emphasize the importance of accurately estimating posterior class probabilities and rejecting predictions made with low confidence, so as to minimize the rate of unintended prosthesis activations. To that end, we propose a confidence-based error rejection strategy using grip-specific thresholds. We evaluate the efficacy of the proposed system with real-time pick-and-place experiments using a commercial multi-articulated prosthetic hand, involving 12 able-bodied and two transradial (i.e., below-elbow) amputee participants. Our results suggest that intuitive, classification-based multi-grip control could be deployed in existing upper-limb prosthetic systems subject to small modifications.
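The confidence-based error rejection strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grip names, threshold values, and the four-class posterior vectors are all hypothetical, and in practice the grip-specific thresholds would be tuned per class on validation data.

```python
import numpy as np

# Hypothetical grips and grip-specific confidence thresholds.
GRIPS = ["power", "lateral", "tripod", "open"]
THRESHOLDS = {"power": 0.70, "lateral": 0.80, "tripod": 0.85, "open": 0.60}

def decide(posteriors):
    """Return the predicted grip, or None to reject (no prosthesis action).

    `posteriors` is one vector of posterior class probabilities per
    decoding window, e.g. from an LDA or softmax classifier.
    """
    p = np.asarray(posteriors)
    best = int(p.argmax())
    grip = GRIPS[best]
    # Reject low-confidence predictions to limit unintended activations.
    if p[best] < THRESHOLDS[grip]:
        return None
    return grip

print(decide([0.90, 0.05, 0.03, 0.02]))  # confident -> "power"
print(decide([0.40, 0.30, 0.20, 0.10]))  # below "power" threshold -> None
```

Using a separate threshold per grip lets the controller be stricter for grips whose misclassification cost is high, rather than applying one global rejection level across all classes.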