Abstract--Reported studies on pattern recognition of electromyograms (EMG) for the control of prosthetic devices traditionally focus on classification accuracy of signals recorded in a laboratory. The difference between the constrained nature in which such data are often collected and the unpredictable nature of prosthetic use is an example of the semantic gap between research findings and a viable clinical implementation. In this work, we demonstrate that the variations in limb position associated with normal use can have a substantial impact on the robustness of EMG pattern recognition, as illustrated by an increase in average classification error from 3.8% to 18%. We propose to solve this problem by (1) collecting EMG data and training the classifier in multiple limb positions and by (2) measuring the limb position with accelerometers. Applying these two methods to data from ten normally limbed subjects, we reduce the average classification error from 18% to 5.7% and 5.0%, respectively. Our study shows how sensor fusion (using EMG and accelerometers) may be an efficient method to mitigate the effect of limb position and improve classification accuracy.
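The sensor-fusion idea in this abstract can be illustrated with a minimal sketch: concatenate accelerometer (limb-position) features with EMG features and train a single linear discriminant classifier. Everything below is synthetic stand-in data invented for illustration (the motion/position counts, feature dimensions, and noise levels are assumptions, not the study's recordings); the position-dependent shift in the EMG features mimics the limb-position effect the abstract describes.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: 4 hand motions, each recorded in 3 limb positions.
# Limb position shifts the EMG feature means (the confound), while the
# accelerometer channel encodes the position itself.
n_per, motions, positions = 40, 4, 3
emg_blocks, acc_blocks, labels = [], [], []
for m in range(motions):
    for p in range(positions):
        emg = rng.normal(loc=m + 1.5 * p, scale=0.4, size=(n_per, 2))
        acc = p + rng.normal(0.0, 0.05, size=(n_per, 1))
        emg_blocks.append(emg)
        acc_blocks.append(acc)
        labels += [m] * n_per

X_emg = np.vstack(emg_blocks)                       # EMG features only
X_fused = np.hstack([X_emg, np.vstack(acc_blocks)]) # EMG + accelerometer
y = np.array(labels)

def held_out_accuracy(X, y):
    """Train LDA on 70% of the trials, report accuracy on the rest."""
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    return LinearDiscriminantAnalysis().fit(Xtr, ytr).score(Xte, yte)

acc_emg = held_out_accuracy(X_emg, y)
acc_fused = held_out_accuracy(X_fused, y)
```

On this toy data the fused classifier recovers the motion label despite the position shift, because the linear discriminant can subtract the position term supplied by the accelerometer; EMG-only features leave the motion classes overlapping across positions.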
The continuing enhancement of the surgical environment in the digital age has led to a number of innovations being highlighted as potential disruptive technologies in the surgical workplace. Augmented reality (AR) and virtual reality (VR) are rapidly becoming more available, accessible and, importantly, affordable, hence their application in healthcare to enhance the medical use of data is certain. Whether it relates to anatomy, intraoperative surgery or post-operative rehabilitation, applications are already being investigated for their role in the surgeon's armamentarium. Here we provide an introduction to the technology and the potential areas of development in the surgical arena.
Recently, renewed focus on prosthetics research has pushed the field to provide more clinically relevant outcomes. One way to work towards this goal is to examine the differences between research and clinical results. The constrained nature in which offline training and test data are often collected, compared to the dynamic nature of prosthetic use, is just one example. In this work, we demonstrate that variations in limb position after training can have a substantial impact on the robustness of myoelectric pattern recognition.
Protein–protein interaction (PPI) maps provide insight into cellular biology and have received considerable attention in the post-genomic era. While large-scale experimental approaches have generated large collections of experimentally determined PPIs, technical limitations preclude certain PPIs from detection. Recently, we demonstrated that yeast PPIs can be computationally predicted using re-occurring short polypeptide sequences between known interacting protein pairs. However, the computational requirements and low specificity made this method unsuitable for large-scale investigations. Here, we report an improved approach, which exhibits a specificity of ∼99.95% and executes 16 000 times faster. Importantly, we report the first all-to-all sequence-based computational screen of PPIs in the yeast Saccharomyces cerevisiae, in which we identify 29 589 high-confidence interactions among ∼2 × 10⁷ possible pairs. Of these, 14 438 PPIs have not been previously reported and may represent novel interactions. In particular, these results reveal a richer set of membrane protein interactions, which are not readily amenable to experimental investigation. From the novel PPIs, a putative protein complex composed largely of membrane proteins was revealed. In addition, two novel gene functions were predicted and experimentally confirmed to affect the efficiency of non-homologous end-joining, providing further support for the usefulness of the identified PPIs in biological investigations.
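The core idea of scoring a candidate pair by re-occurring short polypeptide sequences can be sketched in a few lines. This is a toy illustration only: the sequences are invented, the window length `K = 3` is an assumption, and the published method involves far more machinery (windowed scanning, significance thresholds) than this simple co-occurrence count.

```python
from itertools import product

K = 3  # hypothetical short-subsequence (k-mer) length for this sketch

def kmers(seq, k=K):
    """All length-k subsequences of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_table(known_pairs, k=K):
    """Collect (k-mer, k-mer) combinations seen across known interacting pairs."""
    table = set()
    for a, b in known_pairs:
        table.update(product(kmers(a, k), kmers(b, k)))
    return table

def score(pair, table, k=K):
    """Count how many k-mer combinations of a candidate pair re-occur in the table."""
    a, b = pair
    return sum(1 for combo in product(kmers(a, k), kmers(b, k)) if combo in table)

# Invented toy sequences, not real yeast proteins.
known = [("MKVLAAGG", "TTPQRSLL"), ("MKVLHHWW", "NNPQRSYY")]
table = build_table(known)

candidate_like = ("AAMKVLCC", "DDPQRSEE")  # shares MKV/PQR-style motifs with known pairs
candidate_rand = ("GGGGGGGG", "CCCCCCCC")  # shares nothing
```

A candidate pair whose sequences contain subsequence combinations already observed between interacting proteins scores high; unrelated sequences score zero, which is the intuition behind filtering ∼2 × 10⁷ possible pairs down to a high-confidence set.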
It is proposed that myo-electric signals can be used to augment conventional speech-recognition systems to improve their performance under acoustically noisy conditions (e.g. in an aircraft cockpit). A preliminary study is performed to ascertain the presence of speech information within myo-electric signals from facial muscles. Five surface myo-electric signals are recorded during speech, using Ag-AgCl button electrodes embedded in a pilot oxygen mask. An acoustic channel is also recorded to enable segmentation of the recorded myo-electric signal. These segments are processed off-line, using a wavelet transform feature set, and classified with linear discriminant analysis. Two experiments are performed, using a ten-word vocabulary consisting of the numbers 'zero' to 'nine'. Five subjects are tested in the first experiment, where the vocabulary is not randomised. Subjects repeat each word continuously for 1 min; classification errors range from 0.0% to 6.1%. Two of the subjects perform the second experiment, saying words from the vocabulary randomly; classification errors are 2.7% and 10.4%. The results demonstrate that there is excellent potential for using surface myo-electric signals to enhance the performance of a conventional speech-recognition system.
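The processing pipeline described here (wavelet-transform features classified with linear discriminant analysis) can be sketched on synthetic signals. Everything below is an assumption-laden stand-in: a hand-rolled Haar decomposition replaces the study's wavelet feature set, and two sinusoids of different frequency stand in for the myo-electric recordings of two words.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

def haar_band_energies(x, levels=4):
    """Log-energy of each detail band of a Haar wavelet decomposition,
    plus the final approximation band, as a compact feature vector."""
    feats, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2.0)
        approx = (even + odd) / np.sqrt(2.0)
        feats.append(np.log(np.sum(detail ** 2) + 1e-12))
    feats.append(np.log(np.sum(approx ** 2) + 1e-12))
    return np.array(feats)

rng = np.random.default_rng(1)
n_per, length = 60, 256
X, y = [], []
for label, freq in enumerate([5, 30]):  # two hypothetical "words"
    t = np.arange(length)
    for _ in range(n_per):
        sig = np.sin(2 * np.pi * freq * t / length + rng.uniform(0, 2 * np.pi))
        sig += rng.normal(0.0, 0.5, length)  # measurement noise
        X.append(haar_band_energies(sig))
        y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
acc = LinearDiscriminantAnalysis().fit(Xtr, ytr).score(Xte, yte)
```

Because the two signal classes concentrate their energy in different wavelet bands, even this crude feature set separates them cleanly with a linear classifier, which is the same feature-then-LDA structure the abstract describes.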