Automatic sleep staging from a single channel is a challenging problem in sleep research. In this paper, a simple and efficient method named PPG-based multi-class automatic sleep staging (PMSS) is proposed that uses only a photoplethysmography (PPG) signal. Single-channel PPG data were obtained from four categories of subjects in the CAP sleep database. After preprocessing of the PPG data, a total of 21 features were extracted from the time domain, frequency domain, and nonlinear domain. Finally, the Light Gradient Boosting Machine (LightGBM) classifier was used for multi-class sleep staging. The accuracy of the multi-class automatic sleep staging exceeded 70%, and Cohen’s kappa statistic κ exceeded 0.6. These results show that the PMSS method can also be applied to stage sleep in patients with sleep disorders.
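As a rough illustration of the feature-extraction stage, the sketch below computes one example feature from each of the three domains for a PPG-like epoch. The specific features shown (mean/standard deviation, dominant frequency via a naive DFT, zero-crossing rate as a crude complexity proxy) are assumptions for demonstration only, not the paper's 21-feature set.

```python
# Illustrative sketch (not the paper's exact features): one time-domain,
# one frequency-domain, and one nonlinear-domain feature per PPG epoch.
import math
import statistics

def ppg_features(x, fs=64.0):
    """Return a small dict of example features for one PPG epoch."""
    n = len(x)
    # Time domain: mean and population standard deviation of the epoch.
    feats = {"mean": statistics.fmean(x), "std": statistics.pstdev(x)}
    # Frequency domain: dominant frequency via a naive DFT (fine for short epochs).
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    feats["dom_freq_hz"] = best_k * fs / n
    # Nonlinear domain: zero-crossing rate of the mean-removed epoch,
    # a crude stand-in for the complexity measures such papers use.
    m = feats["mean"]
    feats["zcr"] = sum(1 for a, b in zip(x, x[1:]) if (a - m) * (b - m) < 0) / (n - 1)
    return feats

# Example: a 1 Hz sinusoid sampled at 64 Hz; dom_freq_hz should be close to 1.0.
sig = [math.sin(2 * math.pi * 1.0 * i / 64.0) for i in range(128)]
f = ppg_features(sig)
```

Per-epoch feature dicts like this would then be assembled into a matrix and passed to a gradient-boosting classifier such as LightGBM.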
School violence is a serious problem all over the world, and violence detection is significant for protecting juveniles. School violence can be detected from the biological signals of victims, and emotion recognition is an important means of detecting violent events. In this research, a violence simulation experiment was designed and performed for a school violence detection system, and emotional voice recordings from the experiment were extracted and analyzed. A consecutive elimination process (CEP) algorithm is proposed for emotion recognition in this paper. After parameter optimization, SVM was chosen as the classifier, and the algorithm was validated on the Berlin database, an emotional speech database of adults, with a mean accuracy of 79.05% over seven emotions. The children's emotional speech database collected in the violence simulation was also classified by the SVM classifier with the proposed CEP algorithm, yielding a mean accuracy of 66.13%. The results show that high classification performance can be achieved with the CEP algorithm. Comparison with the adult database indicates that children's and adults' voices should be treated differently in speech emotion recognition research: accuracy on the children's database is lower than on the adult database, and violence-detection accuracy is expected to improve once the system's other signals are incorporated.
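The abstract does not spell out the CEP algorithm, so the loop below is only a hypothetical sketch of a "consecutive elimination" idea: repeatedly drop the single feature whose removal raises the validation score, stopping when no removal helps. The `score_fn` callback stands in for classifier cross-validation accuracy; the toy scorer and feature names are invented for demonstration.

```python
# Hypothetical sketch of a consecutive-elimination feature-selection loop
# (the paper does not detail CEP; this is an assumed greedy interpretation).

def consecutive_elimination(features, score_fn):
    """Greedily eliminate features one at a time while the score improves."""
    selected = list(features)
    best = score_fn(selected)
    improved = True
    while improved and len(selected) > 1:
        improved = False
        for f in list(selected):
            trial = [g for g in selected if g != f]
            s = score_fn(trial)
            if s > best:  # removing f helped, so eliminate it and restart
                best, selected, improved = s, trial, True
                break
    return selected, best

# Toy scorer: features "a" and "b" help, "noise*" features hurt the score.
def toy_score(feats):
    base = 0.5 + 0.1 * ("a" in feats) + 0.1 * ("b" in feats)
    return base - 0.05 * sum(f.startswith("noise") for f in feats)

sel, acc = consecutive_elimination(["a", "b", "noise1", "noise2"], toy_score)
```

In a real pipeline, `score_fn` would refit the SVM on the candidate feature subset and return cross-validated accuracy, which makes each elimination step considerably more expensive than this toy.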
With the widespread use of emotion recognition, cross-subject emotion recognition based on EEG signals has become a hot topic in affective computing. Electroencephalography (EEG) can be used to detect the brain’s electrical activity associated with different emotions. The aim of this research is to improve cross-subject accuracy by enhancing the generalization of features. A multi-classifier fusion method based on mutual information with sequential forward floating selection (MI_SFFS) is proposed. The dataset used in this paper is DEAP, a multi-modal open dataset containing 32 EEG channels and multiple other physiological signals. First, high-dimensional features are extracted from 15 EEG channels of DEAP after slicing the data with a 10 s time window. Second, MI and SFFS are integrated as a novel feature-selection method. Then, support vector machine (SVM), k-nearest neighbor (KNN) and random forest (RF) classifiers are employed to classify positive and negative emotions, and their output probabilities are used as weighted features for further classification. To evaluate the model performance, leave-one-out cross-validation is adopted. Finally, cross-subject classification accuracies of 0.7089, 0.7106 and 0.7361 are achieved by the SVM, KNN and RF classifiers, respectively. The results demonstrate the feasibility of splicing different classifiers’ output probabilities into the weighted features.
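The SFFS part of the selection step can be sketched as a forward search with a "floating" backward check. The version below is a minimal illustration: `score_fn` stands in for whatever subset-quality criterion is used (e.g., MI-based relevance or classifier accuracy), and the toy scorer with its redundancy penalty is an invented example, not the paper's criterion.

```python
# Minimal sketch of sequential forward floating selection (SFFS).
def sffs(features, score_fn, k):
    """Select k features: greedy forward steps plus floating removals."""
    selected = []
    while len(selected) < k:
        # Forward step: add the remaining feature that scores best with the set.
        cand = max((f for f in features if f not in selected),
                   key=lambda f: score_fn(selected + [f]))
        selected.append(cand)
        # Floating step: drop any previously chosen feature whose removal
        # now improves the score (only meaningful once the set is larger).
        improved = True
        while improved and len(selected) > 2:
            improved = False
            cur = score_fn(selected)
            for f in list(selected):
                trial = [g for g in selected if g != f]
                if score_fn(trial) > cur:
                    selected, improved = trial, True
                    break
    return selected

# Toy criterion: additive weights, with a penalty when the redundant
# pair ("f1", "f2") is selected together.
def toy_score(feats):
    w = {"f1": 0.3, "f2": 0.2, "f3": 0.1}
    s = sum(w[f] for f in feats)
    if "f1" in feats and "f2" in feats:
        s -= 0.15
    return s

chosen = sffs(["f1", "f2", "f3"], toy_score, k=2)
```

With this scorer the redundancy penalty makes "f3" a better partner for "f1" than the individually stronger "f2", which is exactly the kind of interaction a plain top-k ranking by MI would miss.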
This research studied violence detection from ECG signals shorter than 6 seconds. Features were calculated by applying Bivariate Empirical Mode Decomposition (BEMD) and Recurrence Quantification Analysis (RQA) to ECG signals recorded during a violence simulation in a primary school involving 12 pupils from two grades. The feature sets were fed to a kNN classifier and tested using 10-fold cross-validation and leave-one-subject-out (LOSO) validation in subject-dependent and subject-independent training models, respectively. Features from BEMD outperformed those from RQA in both 10-fold cross-validation, i.e., 88% vs. 73% (2nd-grade pupils) and 87% vs. 81% (5th-grade pupils), and LOSO validation, i.e., 77% vs. 75% (2nd-grade pupils) and 80% vs. 76% (5th-grade pupils), but showed larger variation than the RQA features in both validations. Average performance for the subject-specific system in 10-fold cross-validation was 100% vs. 93% (2nd-grade pupils) and 100% vs. 97% (5th-grade pupils) for features from BEMD and RQA, respectively. The results indicate that ECG signals as short as 6 seconds can be used successfully to detect violent events using subject-specific classifiers.
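The subject-independent evaluation scheme can be sketched as follows: hold out all of one subject's epochs, train on everyone else, and average per-subject accuracies. This is a minimal illustration using a 1-nearest-neighbour classifier on Euclidean distance; the paper's actual kNN settings and feature vectors are not specified here, and the toy data in the example is invented.

```python
# Minimal sketch of leave-one-subject-out (LOSO) validation with 1-NN.
import math

def knn_predict(train, x):
    """1-nearest-neighbour label; train is a list of (feature_vec, label)."""
    return min(train, key=lambda t: math.dist(t[0], x))[1]

def loso_accuracy(samples):
    """samples: list of (subject_id, feature_vec, label); returns mean per-subject accuracy."""
    subjects = sorted({s for s, _, _ in samples})
    accs = []
    for held in subjects:
        train = [(v, y) for s, v, y in samples if s != held]
        test = [(v, y) for s, v, y in samples if s == held]
        correct = sum(knn_predict(train, v) == y for v, y in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# Toy data: two well-separated classes shared across two subjects.
toy = [
    ("s1", (0.0, 0.0), "calm"), ("s1", (10.0, 10.0), "violent"),
    ("s2", (0.5, 0.0), "calm"), ("s2", (9.5, 10.0), "violent"),
]
acc = loso_accuracy(toy)
```

The contrast with 10-fold cross-validation is that 10-fold can place epochs from the same subject in both train and test folds, which is why the subject-specific numbers above are so much higher than the LOSO ones.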