Background: Myoelectric pattern recognition systems can decode movement intention to drive upper-limb prostheses. Despite recent advances in academic research, the commercial adoption of such systems remains low. This limitation is mainly due to the lack of classification robustness and the simultaneous requirement for a large number of electromyogram (EMG) electrodes. We propose to address these two issues by using a multi-modal approach that combines surface electromyography (sEMG) with inertial measurements (IMs) and an appropriate training data collection paradigm. We demonstrate that this can significantly improve classification performance compared with conventional techniques based exclusively on sEMG signals.

Methods: We collected and analyzed a large dataset comprising recordings of 20 able-bodied and two amputee participants executing 40 movements. Additionally, we conducted a novel real-time prosthetic hand control experiment with 11 able-bodied subjects and one amputee using a state-of-the-art commercial prosthetic hand. A systematic performance comparison was carried out to investigate the potential benefit of incorporating IMs in prosthetic hand control.

Results: The inclusion of IM data improved performance significantly, increasing classification accuracy (CA) in the offline analysis and improving completion rates (CRs) in the real-time experiment. Our findings were consistent across able-bodied and amputee subjects. Integrating the sEMG electrodes and IM sensors within a single sensor package enabled us to achieve high-level performance using, on average, four to six sensors.

Conclusions: The results of our experiments suggest that IMs can form an excellent complementary source signal for upper-limb myoelectric prostheses. We believe that multi-modal control solutions have the potential to improve the usability of upper-extremity prostheses in real-life applications.

Electronic supplementary material: The online version of this article (doi:10.1186/s12984-017-0284-4) contains supplementary material, which is available to authorized users.
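The multi-modal fusion described in this abstract amounts to combining per-window feature vectors from both modalities before classification. The sketch below is illustrative only: synthetic data and an LDA classifier stand in for the study's actual pipeline, and the feature dimensions and class structure are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

n_trials, n_emg_feats, n_im_feats, n_classes = 200, 16, 12, 4
labels = rng.integers(0, n_classes, size=n_trials)

# Synthetic feature matrices standing in for windowed sEMG and IM features;
# class-dependent means make the toy problem separable.
emg = rng.normal(size=(n_trials, n_emg_feats)) + labels[:, None]
imu = rng.normal(size=(n_trials, n_im_feats)) + 0.5 * labels[:, None]

# Multi-modal fusion: concatenate the per-window feature vectors.
fused = np.hstack([emg, imu])

clf = LinearDiscriminantAnalysis().fit(fused, labels)
acc = clf.score(fused, labels)
print(f"training accuracy with fused sEMG + IM features: {acc:.2f}")
```

Concatenation-level fusion like this keeps the classifier unchanged, which is one reason it suits sensor packages that co-locate sEMG electrodes and IM sensors.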
Machine learning-based myoelectric control is regarded as an intuitive paradigm, because the mapping it creates between muscle co-activation patterns and prosthesis movements aims to simulate the physiological pathways found in the human arm. Despite that, there is evidence that closed-loop interaction with a classification-based interface results in user adaptation, which leads to performance improvement with experience. Recently, there has been a focus shift toward continuous prosthesis control, yet little is known about whether and how user adaptation affects myoelectric control performance in dexterous, intuitive tasks. We investigate the effect of short-term adaptation with independent finger position control by conducting real-time experiments with 10 able-bodied and two transradial amputee subjects. We demonstrate that despite using an intuitive decoder, experience leads to significant improvements in performance. We argue that this is due to the lack of a fully natural control scheme, which is mainly caused by differences between the anatomy of human and artificial hands, movement intent decoding inaccuracies, and the lack of proprioception. Finally, we extend previous work in classification-based and wrist continuous control by verifying that offline analyses cannot reliably predict real-time performance, thereby reiterating the importance of validating myoelectric control algorithms with real-time experiments.
In the field of upper-limb myoelectric prosthesis control, the use of statistical and machine learning methods has long been proposed as a means of enabling intuitive grip selection and actuation. Recently, this paradigm has found its way toward commercial adoption. Machine learning-based prosthesis control typically relies on the use of a large number of electrodes. Here, we propose an end-to-end strategy for multi-grip, classification-based prosthesis control using only two sensors, comprising electromyography (EMG) electrodes and inertial measurement units (IMUs). We emphasize the importance of accurately estimating posterior class probabilities and rejecting predictions made with low confidence, so as to minimize the rate of unintended prosthesis activations. To that end, we propose a confidence-based error rejection strategy using grip-specific thresholds. We evaluate the efficacy of the proposed system with real-time pick-and-place experiments using a commercial multi-articulated prosthetic hand and involving 12 able-bodied and two transradial (i.e., below-elbow) amputee participants. The results promise the potential for deploying intuitive, classification-based multi-grip control in existing upper-limb prosthetic systems subject to small modifications.
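The rejection rule this abstract describes can be sketched in a few lines: compare the winning class's posterior probability against that grip's own threshold, and suppress the prosthesis command when confidence falls short. This is a minimal sketch; the threshold values and the -1 rejection label are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def reject_low_confidence(posteriors, thresholds, reject_label=-1):
    """Return the predicted class per window, or reject_label when the
    winning class's posterior falls below that class's own threshold."""
    posteriors = np.asarray(posteriors, dtype=float)
    winners = posteriors.argmax(axis=1)       # most probable grip per window
    confidences = posteriors.max(axis=1)      # its posterior probability
    accept = confidences >= thresholds[winners]
    return np.where(accept, winners, reject_label)

# Hypothetical grip-specific thresholds (e.g. stricter for grip 1).
thresholds = np.array([0.6, 0.8, 0.7])

posteriors = np.array([
    [0.70, 0.20, 0.10],   # grip 0 at 0.70 >= 0.6 -> accepted
    [0.15, 0.75, 0.10],   # grip 1 at 0.75 <  0.8 -> rejected (no activation)
    [0.10, 0.15, 0.75],   # grip 2 at 0.75 >= 0.7 -> accepted
])
decisions = reject_low_confidence(posteriors, thresholds)
print(decisions)  # [ 0 -1  2]
```

Per-grip thresholds allow the rejection rate to be tuned separately for grips that are costlier to trigger by mistake, rather than applying one global confidence cut-off.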
The reconstruction of finger movement activity from surface electromyography (sEMG) has been proposed for the proportional and simultaneous myoelectric control of multiple degrees-of-freedom (DOFs). In this paper, we propose a framework for assessing decoding performance on novel movements, that is, movements not included in the training dataset. We then use our proposed framework to compare the performance of linear and kernel ridge regression for the reconstruction of finger movement from sEMG and accelerometry. Our findings provide evidence that, although the performance of the non-linear method is superior for movements seen by the decoder during the training phase, the performance of the two algorithms is comparable when generalizing to novel movements.

*A. Krasoulis is supported by grants EP/F500385/1 and BB/F529254/1.
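The linear-versus-kernel comparison on held-out data can be sketched as follows. Everything here is a toy stand-in under assumed dimensions: the real decoders map sEMG/accelerometry features to finger positions, whereas this sketch regresses synthetic features onto a synthetic target and holds out a block of windows as a proxy for a "novel movement".

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)

# Toy stand-ins: 300 windows of 10 features -> 1 finger DOF trajectory.
X = rng.normal(size=(300, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=300)

# Hold out the last 60 windows as a proxy for a movement unseen in training.
X_tr, X_te, y_tr, y_te = X[:240], X[240:], y[:240], y[240:]

linear = Ridge(alpha=1.0).fit(X_tr, y_tr)
kernel = KernelRidge(alpha=1.0, kernel="rbf").fit(X_tr, y_tr)

print("linear R^2 on held-out windows:", linear.score(X_te, y_te))
print("kernel R^2 on held-out windows:", kernel.score(X_te, y_te))
```

Evaluating both decoders on the same held-out block is the essential point of such a framework: a flexible kernel model can fit the training movements more closely without necessarily generalizing better to movements it has never seen.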
Prosthetic devices for hand difference have advanced considerably in recent years, to the point where the mechanical dexterity of a state-of-the-art prosthetic hand approaches that of the natural hand. Control options for users, however, have not kept pace, meaning that the new devices are not used to their full potential. Promising developments in control technology reported in the literature have met with limited commercial and clinical success. We have previously described a biomechanical model of the hand that could be used for prosthesis control. In this study, we report on three key elements of the biomechanical simulations relevant to prosthesis control: we show the performance of the model in replicating recorded hand kinematics and find average correlations of 0.89 between modelled and recorded motions; we show that the computational performance of the simulations is fast enough to achieve real-time control with a robotic hand in the loop; and we describe the use of the model for controlling object gripping. Despite some limitations in accessing sufficient driving signals, the model performance shows promise as a controller for prosthetic hands when driven with recorded EMG signals. We identify areas for future work to address these limitations.
People who use an upper limb prosthesis and/or have used services provided by a prosthetic rehabilitation centre, hereafter called users, are yet to benefit from the fast-paced growth of academic knowledge within the field of upper limb prosthetics. Crucially, over the past decade, research has acknowledged the limitations of conducting laboratory-based studies for clinical translation. This has led to an increase, albeit rather small, in trials that gather real-world user data. Multi-stakeholder collaboration is critical within such trials, especially between researchers, users, and clinicians, as well as policy makers, charity representatives, and industry specialists. This paper presents a co-creation model that enables researchers to collaborate with multiple stakeholders, including users, throughout the duration of a study. This approach can lead to a transition in the roles of stakeholders such as users, from participants to co-researchers. This presents a scenario whereby the boundaries between research and participation become blurred and ethical considerations may become complex. However, the time and resources required to conduct co-creation within academia can lead to greater impact and benefit the people that the research aims to serve.
Cochlear implants (CIs) require efficient speech processing to maximize information transmission to the brain, especially in noise. A novel CI processing strategy was proposed in our previous studies, in which sparsity-constrained non-negative matrix factorization (NMF) was applied to the envelope matrix in order to improve CI performance in noisy environments. These studies showed that the algorithm needs to be adaptive, rather than fixed, in order to adjust to acoustic conditions and individual characteristics. Here, we explore the benefit of a system that allows the user to adjust the signal processing in real time according to their individual listening needs and hearing capabilities. In this system, which is based on MATLAB®, SIMULINK® and the xPC Target™ environment, the input/output (I/O) boards are interfaced between the SIMULINK blocks and the CI stimulation system, such that the output can be controlled successfully in the manner of a hardware-in-the-loop (HIL) simulation, offering a convenient way to implement a real-time signal processing module that does not require any low-level language. The sparsity constraint parameter of the algorithm was adapted online and subjectively during an experiment with normal-hearing subjects and noise-vocoded speech simulation. Results show that subjects chose different parameter values according to their own intelligibility preferences, indicating that adaptive real-time algorithms are beneficial for fully exploring subjective preferences. We conclude that adaptive real-time systems are beneficial for experimental design, and that such systems allow one to conduct psychophysical experiments with high ecological validity.
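The factorization at the heart of this strategy decomposes a non-negative envelope matrix into a small set of non-negative components. The sketch below is a generic Lee-Seung multiplicative-update NMF, not the study's sparsity-constrained variant (the sparsity term is omitted for brevity), and the matrix dimensions are illustrative assumptions.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates (Frobenius loss).
    V (n x m, non-negative) is approximated as W (n x rank) @ H (rank x m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity of W and H.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy non-negative "envelope matrix": channels x time frames (shapes assumed).
rng = np.random.default_rng(3)
V = rng.random((22, 100))
W, H = nmf(V, rank=4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.3f}")
```

An adaptive variant would expose a regularization weight on H (the sparsity parameter the subjects tuned online) so that the trade-off between reconstruction fidelity and sparsity can be adjusted in real time.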
Magnetomyography (MMG) utilizes magnetic sensors to record the small magnetic fields produced by the electrical activity of muscles, the same activity that gives rise to the electromyogram (EMG) signal typically recorded with surface electrodes. Detection and recording of these small fields require sensitive magnetic sensors, possibly equipped with a CMOS readout system. This paper presents a highly sensitive Hall sensor fabricated in a standard 0.18 µm CMOS technology for future low-field MMG applications. Our experimental results show that the proposed Hall sensor achieves a high current-mode sensitivity of approximately 2400 V/A/mT. Further refinement is required to enable measurement of MMG signals from muscles.
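A back-of-envelope check shows what the reported sensitivity implies: with a current-mode sensitivity of 2400 V/(A·mT), a picotesla-scale muscle field yields only nanovolt-level Hall voltages, consistent with the abstract's note that further refinement is needed. The bias current and field strength below are assumptions chosen for illustration, not values from the paper.

```python
# Current-mode sensitivity reported in the abstract.
S_I = 2400.0        # V / (A * mT)

# Illustrative operating point (assumed, not from the paper):
I_bias = 1e-3       # 1 mA bias current
B = 1e-9            # 1 pT muscle field, expressed in mT

# Hall output voltage: V_H = S_I * I_bias * B
V_H = S_I * I_bias * B
print(f"Hall output for a 1 pT field at 1 mA bias: {V_H * 1e9:.1f} nV")  # 2.4 nV
```

Nanovolt signals sit far below typical Hall-sensor noise floors, which is why either much larger sensitivity or substantial averaging/readout gain would be needed before muscle fields become measurable.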