Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features, because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns inside the instantaneous values of high-density sEMG enable gesture recognition to be performed using only the sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images, using a deep convolutional network as the classifier. Without any windowed features, the recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of sEMG image and 99.0% using simple majority voting over 40 frames at a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures from the NinaPro database and 27 gestures from the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency; for example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses.

A muscle-computer interface (MCI) is a communication system that transforms myoelectrical signals from mere reflections of muscle activities into interaction commands that convey the intent of the user's movement. A muscle is composed of many motor units (MUs), and the "discharge" or "firing" of each MU generates a "motor unit action potential" (MUAP), which is the sum of the contributions from the individual fibres that compose the MU.
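The majority-voting step described above is straightforward: each frame is classified independently, and the most frequent label over a short run of frames (e.g. 40 frames, i.e. 40 ms at 1,000 Hz) is taken as the final prediction. A minimal sketch in Python (the function name and example labels are illustrative, not from the paper):

```python
from collections import Counter

def majority_vote(frame_predictions):
    """Aggregate per-frame gesture labels by simple majority voting:
    the most frequent label across the frames wins."""
    counts = Counter(frame_predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical example: 40 per-frame predictions at 1,000 Hz
# cover a 40 ms window; gesture 3 dominates and is returned.
preds = [3] * 28 + [5] * 8 + [1] * 4
final_label = majority_vote(preds)
```

Because each frame is classified on its own, the vote adds no feature-extraction latency; the only delay is the length of the voting window itself.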
Surface electromyography (sEMG) records a muscle's electrical activity from the surface of the skin and thus reflects the generation and propagation of MUAPs. sEMG-based gesture recognition is the technical core of non-intrusive muscle-computer interfaces, which are often directed at controlling active prostheses1, wheelchairs2 and exoskeletons3, or at providing an alternative interaction method for video games4. Gesture recognition based on sEMG can be naturally framed as a pattern classification problem in which a classifier is usually trained through supervised learning. It has generally been accepted that feeding the instantaneous values of myoelectric signals directly to a classifier is impractical for pattern recognition techniques5,6. This belief rests on the empirical assumption that the instantaneous values of myoelectric signals are useless for gesture recognition because the raw myoelectric signal in each channel is non-stationary, non-linear, stochastic and unpredictable7-9. These features of the myoelectric signal reflect the constant variation of the actual set of recruited motor units within the range of available motor units and the arbitra...
High-density surface electromyography (HD-sEMG) records muscles' electrical activity from a restricted area of the skin using two-dimensional arrays of closely spaced electrodes. This technique allows the analysis and modelling of sEMG signals in both the temporal and spatial domains, opening new possibilities for next-generation muscle-computer interfaces (MCIs). sEMG-based gesture recognition has usually been investigated in an intra-session scenario, and the absence of a standard benchmark database has limited the use of HD-sEMG in real-world MCIs. To address these problems, we present a benchmark database of HD-sEMG recordings of hand gestures performed by 23 participants, acquired with an 8 × 16 electrode array, and propose a deep-learning-based domain adaptation framework to enhance sEMG-based inter-session gesture recognition. Experiments on the NinaPro, CSL-HDEMG and our CapgMyo datasets validate that our approach outperforms state-of-the-art methods in intra-session recognition and effectively improves inter-session gesture recognition.
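The 8 × 16 electrode array maps naturally onto the sEMG-image idea: one instantaneous sample across all 128 channels can be arranged on the electrode grid to form a single-frame image for the convolutional network. A minimal sketch, assuming a row-major channel ordering (the actual channel-to-grid mapping depends on the acquisition hardware):

```python
def frame_to_image(sample, rows=8, cols=16):
    """Arrange one instantaneous HD-sEMG sample (one value per channel)
    into a rows x cols 'sEMG image' matching the electrode grid.
    Row-major channel ordering is an assumption for illustration."""
    if len(sample) != rows * cols:
        raise ValueError("expected %d channel values" % (rows * cols))
    # Slice the flat channel list into consecutive grid rows.
    return [sample[r * cols:(r + 1) * cols] for r in range(rows)]

# Hypothetical usage: 128 instantaneous channel values -> 8 x 16 image.
image = frame_to_image(list(range(128)))
```

Each such image is then fed to the network as a single input frame, so classification needs no temporal window at all.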