Sleep stage classification is a fundamental but cumbersome task in sleep analysis. To score sleep stages automatically, this study presents a classification method based on a two-stage neural network. The first stage, a feature-learning stage, fuses network-trained features with traditional hand-crafted features. A recurrent neural network (RNN) in the second stage learns temporal information between sleep epochs and produces the classification results. To address the serious sample imbalance problem, a novel pre-training process combined with data augmentation is introduced. The proposed method was evaluated on two public databases, Sleep-EDF and Sleep Apnea (SA). It achieves an F1-score of 0.806 and a Kappa coefficient of 0.80 for healthy subjects, and 0.790 and 0.74, respectively, for subjects with suspected sleep disorders. The results show that the method outperforms state-of-the-art methods on the same databases. Comparison experiments show that combining hand-crafted features with network-trained features improves classification performance, and that the RNN is a good choice for learning temporal information across sleep epochs. In addition, the pre-training process with data augmentation is verified to reduce the impact of sample imbalance. The proposed model has the potential to exploit sleep information comprehensively.
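The feature-fusion step described above can be sketched minimally: per-epoch features produced by a network are concatenated with hand-crafted features before the sequence is passed to the RNN. All array sizes below are illustrative assumptions, not the paper's actual dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_epochs = 10            # 30-s sleep epochs forming one input sequence
net_dim, hand_dim = 128, 8

# Stand-ins for the two feature sources (both assumed shapes):
net_feats = rng.normal(size=(n_epochs, net_dim))    # network-trained features
hand_feats = rng.normal(size=(n_epochs, hand_dim))  # e.g. spectral band powers

# Fuse by concatenation; the second-stage RNN would consume the
# whole sequence of fused per-epoch vectors.
fused = np.concatenate([net_feats, hand_feats], axis=1)
print(fused.shape)  # (10, 136)
```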
Commonly used sensors such as accelerometers, gyroscopes, and surface electromyography (sEMG) sensors provide a convenient and practical solution for human activity recognition (HAR) and have gained extensive attention. However, which kind of sensor provides adequate information for satisfactory performance, and whether the position of a single sensor has a significant effect on HAR performance, are sparsely studied. This paper presents a comparative study that fully investigates the performance of the aforementioned sensors in classifying four activities (walking, tooth brushing, face washing, drinking). Sensors are spatially distributed over the human body, and subjects are categorized into three groups (able-bodied people, stroke survivors, and the union of both). The performance of the accelerometer, gyroscope, sEMG, and their combination in each group is evaluated using a support vector machine classifier with leave-one-subject-out cross-validation, and the optimal position for each kind of sensor is presented based on accuracy. Experimental results show that the accelerometer obtains the best performance in each group. The highest accuracy of HAR involving stroke survivors was 95.84 ± 1.75% (mean ± standard error), achieved by the accelerometer attached to the extensor carpi ulnaris. Furthermore, considering the practical application of HAR, a novel approach is proposed to distinguish various activities of stroke survivors based on a HAR model pre-trained on healthy subjects; its highest accuracy is 77.89 ± 4.81% (mean ± standard error), again with the accelerometer attached to the extensor carpi ulnaris.
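The evaluation protocol above, an SVM scored with leave-one-subject-out cross-validation, can be sketched with scikit-learn. The feature matrix, labels, and subject grouping below are synthetic stand-ins for the windowed sensor features the study actually extracts.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))           # 120 windows x 6 features (assumed)
y = rng.integers(0, 4, size=120)        # 4 activity classes
subjects = np.repeat(np.arange(6), 20)  # 6 subjects, 20 windows each

# Leave-one-subject-out: each fold holds out every window from one subject,
# so the classifier is always tested on an unseen person.
logo = LeaveOneGroupOut()
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=logo, groups=subjects)
print(scores.shape)  # one accuracy per held-out subject: (6,)
```

Grouping folds by subject (rather than shuffling windows) is what makes the reported accuracies reflect generalization to new users.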
Objective. Automatic sleep staging models suffer from an inherent class imbalance problem (CIP), which hinders classifiers from achieving better performance. To address this issue, we systematically studied sleep electroencephalogram data augmentation (DA) approaches. Furthermore, we modified and transferred novel DA approaches from related research fields, yielding new and efficient ways to enhance sleep datasets. Approach. This study covers five DA methods: repeating minority classes, morphological change, signal segmentation and recombination, dataset-to-dataset transfer, and generative adversarial networks (GANs). We evaluated these DA methods with a sleep staging model on two datasets, the Montreal archive of sleep studies (MASS) and Sleep-EDF, using a classification model with a typical convolutional neural network architecture, and conducted a comprehensive analysis of the methods. Main results. The classification results showed that DA methods, especially DA by GAN, significantly improved the total classification performance compared with the baseline. The improvements in accuracy, F1 score, and Cohen's Kappa coefficient ranged from 0.90% to 3.79%, 0.73% to 3.48%, and 2.61% to 5.43% on MASS, and from 1.36% to 4.79%, 1.47% to 4.23%, and 2.22% to 4.04% on Sleep-EDF, respectively. DA methods improved the classification performance in most cases, whereas class N1 showed a slight degradation in F1 score. Significance. Overall, our study showed that DA approaches are effective in alleviating the CIP in sleep staging tasks. Meanwhile, this study provides avenues for further improving sleep staging accuracy using DA methods.
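Of the five DA methods listed, "signal segmentation and recombination" is easy to illustrate: epochs from the same sleep stage are cut into segments, and segments are shuffled across epochs to synthesize new same-class examples. This is a minimal sketch of that idea, not the paper's implementation; the toy epoch length and segment count are assumptions.

```python
import numpy as np

def segment_recombine(epochs, n_segments=4, rng=None):
    """Recombine same-class epochs segment-wise.

    epochs: array of shape (n_epochs, epoch_len), all from one sleep stage.
    Returns an array of the same shape built from shuffled segments.
    """
    rng = rng or np.random.default_rng()
    # Split every epoch into equal segments: (n_segments, n_epochs, seg_len)
    parts = np.stack(np.split(epochs, n_segments, axis=1))
    for s in range(n_segments):
        # For each segment position, draw the segment from a random donor epoch.
        parts[s] = parts[s][rng.permutation(len(epochs))]
    return np.concatenate(parts, axis=1)

epochs = np.arange(12, dtype=float).reshape(3, 4)  # 3 toy epochs of length 4
aug = segment_recombine(epochs, n_segments=2, rng=np.random.default_rng(0))
print(aug.shape)  # (3, 4): same shape, recombined content
```

Because segments are only permuted across epochs, each synthetic epoch stays within the statistics of its class, which is what makes the method suitable for enlarging minority stages.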
We provide an open-access dataset of High densitY Surface Electromyogram (HD-sEMG) Recordings (named "Hyser"), a toolbox for neural interface research, and benchmark results for pattern recognition and EMG-force applications. Data from 20 subjects were acquired twice per subject on different days following the same experimental paradigm. We acquired 256-channel HD-sEMG from forearm muscles during dexterous finger manipulations. The Hyser dataset contains five sub-datasets: (1) a pattern recognition (PR) dataset acquired during 34 commonly used hand gestures, (2) a maximal voluntary contraction (MVC) dataset in which subjects contracted each individual finger, (3) a one-degree-of-freedom (DoF) dataset acquired during force-varying contraction of each individual finger, (4) an N-DoF dataset acquired during prescribed contractions of combinations of multiple fingers, and (5) a random-task dataset acquired during random contraction of combinations of fingers without any prescribed force trajectory. Dataset 1 can be used for gesture recognition studies. Datasets 2–5 also record individual finger forces and can thus be used for studies on proportional control of neuroprostheses. Our toolbox can be used to (1) analyze each of the five datasets using standard benchmark methods and (2) decompose HD-sEMG signals into motor unit action potentials via independent component analysis. We expect our dataset, toolbox, and benchmark analyses to provide a unique platform that promotes a wide range of neural interface research and collaboration among neural rehabilitation engineers.
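The decomposition step mentioned last, separating multi-channel sEMG into independent sources via ICA, can be sketched on toy signals. This is a generic blind-source-separation illustration with scikit-learn's FastICA, not the Hyser toolbox's own decomposition pipeline; the two simulated sources and the 2x2 mixing matrix are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)

# Two toy sources standing in for motor-unit activity (assumed):
sources = np.stack([np.sin(40 * t), np.sign(np.sin(23 * t))], axis=1)

# Simulated surface observations: each "electrode" sees a mixture.
mixing = rng.normal(size=(2, 2))
mixed = sources @ mixing.T

# Recover statistically independent components from the mixtures.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)
print(recovered.shape)  # (1000, 2): one estimated source per column
```

Real HD-sEMG decomposition works on hundreds of channels and recovers sources only up to permutation and scaling, which is why recovered components must be matched to motor units afterwards.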