Pattern recognition of electromyography (EMG) signals can potentially improve the performance of myoelectric control for upper limb prostheses with respect to current clinical approaches based on direct control. However, the choice of features for classification is challenging and impacts long-term performance. Here, we propose using raw EMG signals, recorded over multiple days, as direct inputs to deep networks with intrinsic feature-extraction capabilities. Seven able-bodied subjects performed six active motions (plus rest), and EMG signals were recorded for 15 consecutive days with two sessions per day using the MYO armband (MYB, a wearable EMG sensor). Classification was performed by a convolutional neural network (CNN) with raw bipolar EMG samples as inputs, and the performance was compared with linear discriminant analysis (LDA) and stacked sparse autoencoders with features (SSAE-f) and raw samples (SSAE-r) as inputs. The CNN outperformed (lower classification error) both LDA and SSAE-r in the within-session, between-sessions-on-the-same-day, between-pairs-of-days, and leave-one-day-out analyses (p < 0.001). However, no significant difference was found between CNN and SSAE-f. These results demonstrate that the CNN significantly improved performance and increased robustness over time compared with standard LDA and its associated handcrafted features. This data-driven feature-extraction approach may overcome the problem of feature calibration and selection in myoelectric control.
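As a minimal sketch (not the authors' implementation), raw multi-channel EMG can be sliced into overlapping windows to form the CNN input tensors; the channel count, window length, step, and duration below are illustrative assumptions:

```python
import numpy as np

def segment_windows(emg, win_len, step):
    """Slice a (channels, samples) raw EMG recording into overlapping
    windows shaped (n_windows, channels, win_len), the typical input
    layout for a 1-D CNN classifier."""
    n_ch, n_samp = emg.shape
    starts = range(0, n_samp - win_len + 1, step)
    return np.stack([emg[:, s:s + win_len] for s in starts])

# 8-channel MYO-like recording, 2 s at 200 Hz (illustrative numbers)
emg = np.random.randn(8, 400)
windows = segment_windows(emg, win_len=100, step=50)
print(windows.shape)  # (7, 8, 100)
```

Each window then maps to one class label (one of the six motions or rest) during training.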
Electromyography (EMG) is a measure of the electrical activity generated by the contraction of muscles. Non-invasive surface EMG (sEMG)-based pattern recognition methods have shown potential for upper limb prosthesis control; however, they remain insufficient for natural control. Recent advancements in deep learning have shown tremendous progress in biosignal processing. Multiple architectures have been proposed that yield high accuracies (>95%) in offline analysis, yet the delay caused by system optimization remains a challenge for real-time application. This creates a need for a deep learning architecture optimized through fine-tuned hyper-parameters. Although convergence is stochastic, it is important to verify that the performance gain is significant enough to justify the extra computation. In this study, a convolutional neural network (CNN) was implemented to decode hand gestures from sEMG data recorded from 18 subjects, in order to investigate the effect of hyper-parameters on each hand gesture. Results showed that a learning rate of either 0.0001 or 0.001 with 80-100 epochs significantly outperformed (p < 0.05) the other configurations. In addition, regardless of network configuration, some motions (close hand, flex hand, extend hand, and fine grip) performed better (83.7% ± 13.5%, 71.2% ± 20.2%, 82.6% ± 13.9%, and 74.6% ± 15%, respectively) throughout the course of the study. A robust and stable myoelectric control can therefore be designed on the basis of the best-performing hand motions. With improved recognition and uniform gain in performance, the deep learning-based approach has the potential to be a more robust alternative to traditional machine learning algorithms.
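The hyper-parameter search described above can be sketched as a plain grid over learning rates and epoch counts; `mock_train` is a hypothetical stand-in for an actual train-and-validate run of the CNN:

```python
import itertools

def grid_search(train_fn, lrs, epoch_counts):
    """Exhaustively evaluate every (learning rate, epochs) pair and
    return the best validation accuracy with its configuration."""
    best = None
    for lr, n_epochs in itertools.product(lrs, epoch_counts):
        acc = train_fn(lr, n_epochs)
        if best is None or acc > best[0]:
            best = (acc, lr, n_epochs)
    return best

# Stand-in objective peaking near lr=0.001 and 100 epochs; a real run
# would train the CNN and report validation accuracy here.
def mock_train(lr, n_epochs):
    return 1.0 - 100 * abs(lr - 0.001) - abs(n_epochs - 100) / 1000

best_acc, best_lr, best_epochs = grid_search(
    mock_train, lrs=[0.01, 0.001, 0.0001], epoch_counts=[60, 80, 100])
print(best_lr, best_epochs)  # 0.001 100
```

For two hyper-parameters the grid is cheap; with more dimensions, random or Bayesian search usually scales better.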
Advances in myoelectric interfaces have increased the use of wearable prosthetics, including robotic arms. Although promising results have been achieved with pattern recognition-based control schemes, control robustness requires improvement to increase user acceptance of prosthetic hands. The aim of this study was to quantify the performance of stacked sparse autoencoders (SSAE), an emerging deep learning technique, in improving myoelectric control, and to compare multiday surface EMG (sEMG) and intramuscular EMG (iEMG) recordings. Ten able-bodied and six amputee subjects, with average ages of 24.5 and 34.5 years, respectively, were evaluated using offline classification error as the performance metric. Surface and intramuscular EMG were recorded concurrently while each subject performed 11 hand motions. The performance of SSAE was compared with that of a linear discriminant analysis (LDA) classifier. Within-day analysis showed that SSAE (1.38 ± 1.38%) outperformed LDA (8.09 ± 4.53%) using both the sEMG and iEMG data from both able-bodied and amputee subjects (p < 0.001). In the between-day analysis, SSAE outperformed LDA (7.19 ± 9.55% vs. 22.25 ± 11.09%) using both sEMG and iEMG data from both able-bodied and amputee subjects. No significant difference in performance was observed for within-day and pairs-of-days analyses with eight-fold validation when using iEMG and sEMG with SSAE, whereas sEMG outperformed iEMG (p < 0.001) in the between-day analysis with both two-fold and seven-fold validation schemes. These results imply that SSAE can significantly improve the performance of pattern recognition-based myoelectric control schemes and can extract deep information hidden in the EMG data.
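A minimal, illustrative sketch of the sparse-autoencoder idea: a single tied-weight linear layer with an L1 penalty on the hidden code, trained by plain gradient descent on synthetic data. The paper's SSAE stacks several such layers with nonlinearities and is not reproduced here; all sizes and hyper-parameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_sparse_ae(X, n_hidden, lr=0.01, l1=1e-3, epochs=200):
    """Tied-weight linear autoencoder with an L1 sparsity penalty on
    the hidden code: minimise ||X W W^T - X||^2 + l1 * sum|X W|."""
    n = len(X)
    W = rng.normal(0.0, 0.1, (X.shape[1], n_hidden))
    for _ in range(epochs):
        H = X @ W                       # encoder: hidden code
        err = H @ W.T - X               # tied-weight decoder residual
        grad = (X.T @ err @ W + err.T @ X @ W) / n \
               + l1 * (X.T @ np.sign(H)) / n
        W -= lr * grad
    return W

X = rng.normal(size=(64, 10))           # 64 synthetic feature vectors
W = train_sparse_ae(X, n_hidden=6)
recon_err = np.mean(((X @ W) @ W.T - X) ** 2)
```

Once trained, the hidden codes `X @ W` serve as the learned features fed to the classification stage.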
Clinical treatment of skin lesions depends primarily on timely detection and delineation of lesion boundaries for accurate localization of the cancerous region. The prevalence of skin cancer is high, especially that of melanoma, which is aggressive due to its high metastasis rate; timely diagnosis is therefore critical for treatment before the onset of malignancy. To address this problem, medical imaging is used for the analysis and segmentation of lesion boundaries from dermoscopic images. Various methods have been used, ranging from visual inspection to textural analysis of the images. However, the accuracy of these methods is too low for proper clinical treatment, given the sensitivity involved in surgical procedures or drug application. This presents an opportunity to develop an automated model with good accuracy for use in a clinical setting. This paper proposes an automated method for segmenting lesion boundaries that combines two architectures, U-Net and ResNet, collectively called Res-Unet. Moreover, we used image inpainting for hair removal, which improved the segmentation results significantly. We trained our model on the ISIC 2017 dataset and validated it on the ISIC 2017 test set as well as the PH2 dataset. Our proposed model attained a Jaccard index of 0.772 on the ISIC 2017 test set and 0.854 on the PH2 dataset, results comparable to the currently available state-of-the-art techniques.
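The Jaccard index used for evaluation can be computed from binary masks as follows (the standard intersection-over-union definition, not code from the paper):

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (IoU) between two binary segmentation masks:
    |pred AND target| / |pred OR target|."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement
    return np.logical_and(pred, target).sum() / union

pred = np.zeros((4, 4), dtype=int); pred[:2, :2] = 1   # 4 px predicted
truth = np.zeros((4, 4), dtype=int); truth[:2, :] = 1  # 8 px ground truth
print(jaccard_index(pred, truth))  # 0.5 (intersection 4 / union 8)
```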
Automated segmentation of brain tumours from multimodal MR images is pivotal for the analysis and monitoring of disease progression. Because gliomas are malignant and heterogeneous, efficient and accurate segmentation techniques are needed for the successful delineation of tumours into intra-tumoural classes. Deep learning algorithms outperform the more conventional, context-based computer vision approaches on semantic segmentation tasks. Extensively used for biomedical image segmentation, convolutional neural networks have significantly improved the state-of-the-art accuracy in brain tumour segmentation. In this paper, we propose an ensemble of two segmentation networks, a 3D CNN and a U-Net, in a simple yet effective combination that yields more accurate predictions. Both models were trained separately on the BraTS-19 challenge dataset, and their segmentation maps, which differed considerably in the tumour sub-regions segmented, were combined with variable weights to produce the final prediction. The proposed ensemble achieved Dice scores of 0.750, 0.906, and 0.846 for enhancing tumour, whole tumour, and tumour core, respectively, on the validation set, comparing favourably with the currently available state-of-the-art architectures.
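Weighted ensembling of two models' segmentation outputs can be sketched as averaging their class-probability maps and taking an argmax; the shapes and probabilities below are illustrative, not from the paper:

```python
import numpy as np

def ensemble_predict(prob_maps, weights=None):
    """Combine per-model class-probability maps, shaped
    (model, class, H, W), by weighted averaging followed by argmax."""
    prob_maps = np.asarray(prob_maps, dtype=float)
    if weights is None:
        weights = np.full(len(prob_maps), 1.0 / len(prob_maps))
    avg = np.tensordot(weights, prob_maps, axes=1)  # -> (class, H, W)
    return avg.argmax(axis=0)

# Two hypothetical models disagreeing on a 1x2 image with 2 classes
cnn_probs  = np.array([[[0.8, 0.2]], [[0.2, 0.8]]])
unet_probs = np.array([[[0.4, 0.3]], [[0.6, 0.7]]])
labels = ensemble_predict([cnn_probs, unet_probs])
print(labels)  # [[0 1]]
```

Varying the `weights` vector per class is one way to realise the "variably weighted" combination described above.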
Accurate segmentation of the vertebrae from medical images plays an important role in computer-aided diagnosis (CAD), providing doctors and radiologists with an initial and early diagnosis of various vertebral abnormalities. Vertebra segmentation is a very important but difficult task in medical imaging because of low-contrast imaging and noise, and it becomes even more challenging when dealing with fractured (osteoporotic) cases. This work addresses this challenging segmentation problem. Various vertebra segmentation techniques have been proposed in the past. Recently, deep learning techniques have been introduced in biomedical image processing for the segmentation and characterization of several abnormalities, and they are becoming popular for segmentation because of their robustness and accuracy. In this paper, we present a novel combination of a traditional region-based level set with a deep learning framework to predict the shape of vertebral bones accurately, enabling it to handle fractured cases efficiently. We term this framework BFU-Net, a powerful and practical framework for segmenting fractured vertebrae. The proposed method was successfully evaluated on two challenging datasets: (1) 20 CT scans (15 healthy and 5 fractured cases) provided at the CSI 2014 spine segmentation challenge; (2) 25 CT scans (both healthy and fractured cases) provided at the CSI 2016 spine segmentation challenge (xVertSeg.v1). The proposed technique achieved promising results, especially on fractured cases. On the CSI 2014 dataset (lumbar and thoracic), the Dice score was 96.4 ± 0.8% without fractured cases and 92.8 ± 1.9% with fractured cases. Similarly, for the CSI 2016 dataset, the Dice score was 95.2 ± 1.9% on the 15 CT scans with given ground truths and 95.4 ± 2.1% on the full set of 25 CT scans (with 10 annotated CT datasets).
The proposed technique outperformed other state-of-the-art techniques and, for the first time, handled fractured cases efficiently.
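The region-based level-set component can be illustrated with a minimal Chan-Vese-style update (region forces only, omitting curvature regularisation and the deep-network coupling the paper proposes); the synthetic image, seed placement, and iteration count are all assumptions:

```python
import numpy as np

def region_step(image, phi, dt=0.5):
    """One Chan-Vese-style region update: raise phi where a pixel fits
    the inside mean (c1) better than the outside mean (c2), lower it
    otherwise."""
    inside = phi > 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[~inside].mean() if (~inside).any() else 0.0
    return phi + dt * ((image - c2) ** 2 - (image - c1) ** 2)

# Synthetic scan: a bright square (the "bone") on a dark background
image = np.zeros((16, 16))
image[4:12, 4:12] = 1.0

phi = -np.ones((16, 16))   # level-set function, negative = outside
phi[6:10, 6:10] = 1.0      # small positive seed inside the square

for _ in range(10):
    phi = region_step(image, phi)

mask = phi > 0             # the contour has grown to the full square
```

In the combined framework, the deep network supplies a shape prediction that such a region-based evolution then refines, which is what makes fractured (irregular) vertebrae tractable.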