Speech disorders such as dysarthria are common after a stroke. Rehabilitation guided by a speech-language pathologist is needed for recovery; however, Thailand faces a shortage of speech-language pathologists. In this paper, we present a syllable recognition system that can be deployed within a speech rehabilitation system to support the limited number of available speech-language pathologists. The proposed system is based on multimodal fusion of the acoustic signal and surface electromyography (sEMG) collected from facial muscles. Multimodal data fusion is studied to improve recognition under noisy conditions while reducing the number of electrodes needed. The signals are collected simultaneously while subjects articulate 12 Thai syllables designed for rehabilitation exercises. Several features are extracted from the sEMG signals across five channels, and the best combination of features and channels is fused with mel-frequency cepstral coefficients extracted from the acoustic signal. The feature vector from each signal source is projected by a spectral regression extreme learning machine, and the projections are concatenated. Data from seven healthy subjects were collected for evaluation. Results show that multimodal fusion outperforms any single signal source, achieving up to approximately 98% accuracy, an improvement of up to 5% over the unimodal systems. Moreover, its lower standard deviation in classification accuracy, compared to the unimodal systems, indicates improved robustness of the syllable recognition.
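The fusion pipeline described above can be sketched in a few steps: extract features per modality, project each modality's feature vectors into a common low-dimensional space, and concatenate the projections before classification. The following is a minimal illustration on synthetic data; the sEMG time-domain features (MAV, RMS, waveform length, zero crossings) are standard choices but assumptions here, and PCA stands in for the paper's spectral regression extreme learning machine projection.

```python
import numpy as np

rng = np.random.default_rng(0)

def semg_features(window):
    """Common sEMG time-domain features: MAV, RMS, waveform length, zero crossings."""
    mav = np.mean(np.abs(window))
    rms = np.sqrt(np.mean(window ** 2))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum(np.diff(np.sign(window)) != 0)
    return np.array([mav, rms, wl, zc], dtype=float)

def project(X, k):
    """PCA projection to k dims (stand-in for the paper's SR-ELM projection)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Synthetic example: 60 trials, 2 classes, 5 sEMG channels, 256 samples/window.
labels = np.repeat([0, 1], 30)
emg_feats = np.vstack([
    np.concatenate([semg_features(rng.normal(scale=1.0 + y, size=256))
                    for _ in range(5)])
    for y in labels
])
# Stand-in for 13 MFCCs per trial, with a class-dependent shift.
acoustic_feats = rng.normal(size=(60, 13)) + labels[:, None]

# Project each modality to a common dimensionality, then concatenate (fusion).
fused = np.hstack([project(emg_feats, 4), project(acoustic_feats, 4)])

# Nearest-centroid classification on the fused feature vectors.
centroids = np.array([fused[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(fused[:, None] - centroids[None], axis=2), axis=1)
print(f"training accuracy: {np.mean(pred == labels):.2f}")
```

Projecting each modality before concatenation keeps either signal source from dominating the fused vector by sheer dimensionality, which is one motivation for the projection step in the abstract.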
Upper limb amputation significantly limits routine activities. Myoelectric signals detected by electrodes, known as electromyography (EMG), have been used to control upper limb prostheses for such lost limbs. Unfortunately, acquiring, processing, and using these myoelectric signals is complicated, and real-time prosthesis control requires substantial computation to meet accuracy, robustness, and execution-time requirements. Machine learning schemes for pattern recognition are therefore a promising way to improve on traditional control of hand prostheses, which must cope with user movement and varying muscle contraction. This paper presents real-time recognition of three hand postures using surface EMG (sEMG) signals, acquired through the electrode channels simultaneously while each posture is held. Performance is evaluated in terms of classification accuracy and time consumption. Six real-time recognition models, combining two projection techniques and three classifiers, are evaluated. Results indicate that EMG-based pattern recognition (EMG-PR) control outperforms traditional control for hand prostheses in real-time application: the highest classification accuracy is approximately 96%, and the lowest time consumption is 4 ms. In addition, accuracy drops by nearly 3% when the number of electrodes is reduced. These outcomes can be applied to real-time hand prostheses to help alleviate the shortage of available prostheses.
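The real-time pipeline described above amounts to windowing the sEMG stream, extracting features per window, and classifying each window fast enough for prosthesis control. The following is a minimal sketch on synthetic signals; the window length, channel count, per-channel features, and nearest-centroid classifier are all illustrative assumptions, not the paper's actual models.

```python
import time
import numpy as np

rng = np.random.default_rng(1)
WINDOW = 200     # samples per analysis window (e.g. 200 ms at 1 kHz; assumed)
CHANNELS = 4     # hypothetical number of sEMG electrode channels
POSTURES = 3     # three hand postures, as in the paper

def features(window):
    """Per-channel MAV and waveform length -> one feature vector per window."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

def synth_window(posture):
    """Synthetic sEMG: each posture shifts the signal amplitude."""
    return rng.normal(scale=1.0 + 0.5 * posture, size=(WINDOW, CHANNELS))

# Train a simple nearest-centroid model on 50 windows per posture.
X = np.array([features(synth_window(p))
              for p in range(POSTURES) for _ in range(50)])
y = np.repeat(np.arange(POSTURES), 50)
centroids = np.array([X[y == p].mean(axis=0) for p in range(POSTURES)])

def classify(window):
    """One real-time step: features + nearest centroid."""
    f = features(window)
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

# Time a single inference, as the paper's time-consumption metric does.
start = time.perf_counter()
pred = classify(synth_window(2))
latency_ms = (time.perf_counter() - start) * 1e3
print(f"predicted posture {pred} in {latency_ms:.2f} ms")
```

Keeping the per-window step to a feature extraction plus one distance computation is what makes millisecond-scale latencies like the reported 4 ms plausible on embedded prosthesis hardware.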