Electroencephalography (EEG) is another modality for Person Identification (PI). Because of the nature of EEG signals, EEG-based PI is typically performed while the person is carrying out some mental task, such as motor control. However, few works have considered EEG-based PI while the person is in different affective states (affective EEG). The aim of this paper is to improve the performance of affective EEG-based PI using a deep learning approach. We propose a cascade of deep learning models combining Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs): CNNs handle the spatial information in the EEG while RNNs extract the temporal information. We evaluate two types of RNNs, namely Long Short-Term Memory (CNN-LSTM) and Gated Recurrent Unit (CNN-GRU). The proposed method is evaluated on DEAP, a state-of-the-art affective dataset. The results indicate that CNN-GRU and CNN-LSTM can perform PI across different affective states, reaching up to a 99.90-100% mean Correct Recognition Rate (CRR) and significantly outperforming a support vector machine (SVM) baseline that uses power spectral density (PSD) features. Notably, the 100% mean CRR is obtained on only the 40 subjects of the DEAP dataset. When the number of EEG electrodes is reduced from thirty-two to five for more practical applications, the frontal region gives the best results, reaching up to 99.17% CRR (with CNN-GRU). Of the two deep learning models, CNN-GRU slightly outperforms CNN-LSTM while training faster. Furthermore, CNN-GRU overcomes the influence of affective states on EEG-based PI reported in previous works.
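The spatial-then-temporal cascade described above can be illustrated with a minimal numpy forward pass. This is a sketch, not the authors' implementation: the electrode count (32) and subject count (40) follow the DEAP setup mentioned in the abstract, but the filter sizes, hidden dimension, and random weights are illustrative assumptions, and only a single GRU layer with one convolutional stage is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_conv(x, w, b):
    """1-D convolution across electrodes at each time step (the 'CNN' stage).
    x: (T, C) EEG window, w: (F, k) filters, b: (F,) biases.
    Returns (T, F) after ReLU and mean-pooling over spatial positions."""
    T, C = x.shape
    F, k = w.shape
    out = np.zeros((T, C - k + 1, F))
    for f in range(F):
        for i in range(C - k + 1):
            out[:, i, f] = x[:, i:i + k] @ w[f] + b[f]
    return np.maximum(out, 0.0).mean(axis=1)

def gru_forward(seq, params):
    """Single-layer GRU over a (T, F) feature sequence (the 'RNN' stage)."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    h = np.zeros(Uz.shape[0])
    for x in seq:
        z = sigmoid(x @ Wz + h @ Uz + bz)               # update gate
        r = sigmoid(x @ Wr + h @ Ur + br)               # reset gate
        h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
        h = (1.0 - z) * h + z * h_tilde
    return h

# Hypothetical sizes: 128 time samples, 32 electrodes, 8 spatial filters of
# width 5, 16 hidden units, 40 identity classes (DEAP subjects).
T, C, F, H, SUBJECTS, K = 128, 32, 8, 16, 40, 5

def init_gru(F, H):
    g = lambda *s: rng.normal(scale=0.1, size=s)
    return (g(F, H), g(H, H), np.zeros(H),
            g(F, H), g(H, H), np.zeros(H),
            g(F, H), g(H, H), np.zeros(H))

w = rng.normal(scale=0.1, size=(F, K))
b = np.zeros(F)
W_out = rng.normal(scale=0.1, size=(H, SUBJECTS))

eeg = rng.normal(size=(T, C))                  # one EEG window
features = spatial_conv(eeg, w, b)             # spatial features per time step
hidden = gru_forward(features, init_gru(F, H)) # temporal summary
logits = hidden @ W_out                        # per-subject scores
probs = np.exp(logits - logits.max())
probs /= probs.sum()                           # softmax over 40 identities
```

In a trained system the weights would be learned end-to-end; here the point is only the data flow from electrodes (spatial axis) through time (temporal axis) to an identity distribution.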
The process of recording Electroencephalography (EEG) signals is onerous and requires massive storage to keep signals at a usable sampling rate. In this work, we propose the Event-Related Potential Encoder Network (ERPENet), a multi-task autoencoder-based model that can be applied to any ERP-related task. The strength of ERPENet lies in its capability to handle various kinds of ERP datasets and its robustness across multiple recording setups, enabling joint training across datasets. ERPENet incorporates Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) in an autoencoder setup, which simultaneously compresses the input EEG signal and extracts related P300 features into a latent vector. The process of generating the latent vector can thus be viewed as universal joint feature extraction. The network also includes a classification head for attended vs. unattended events as an auxiliary task. We experimented on six different P300 datasets. The results show that the latent vector exhibits better compression capability than the previous state-of-the-art semi-supervised autoencoder model. For attended and unattended event classification, pre-trained weights are adopted as initial weights and tested on unseen P300 datasets to evaluate the adaptability of the model, which shortens the training process compared to random Xavier weight initialization. At a compression rate of 6.84, the classification accuracy outperforms conventional P300 classification models (xDAWN-LDA, DeepConvNet, and EEGNet), achieving 79.37%-88.52% classification accuracy depending on the dataset.

INDEX TERMS: Electroencephalography, P300, deep learning, pre-trained model, spatiotemporal neural networks, multi-task autoencoder. arXiv:1808.06541v2 [eess.SP]
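The multi-task idea above (one shared latent vector serving both reconstruction and an auxiliary attended/unattended classifier) can be sketched as a single forward pass. This is a deliberately reduced illustration, not ERPENet itself: the CNN-LSTM encoder and decoder are replaced by single linear layers, and the epoch dimension, latent size, and loss weighting are assumptions chosen only so the compression rate lands near the 6.84 quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: a flattened (channels x samples) ERP epoch of dimension D
# compressed into a latent vector of dimension L (rate D/L ~ 6.83).
D, L = 2048, 300

W_enc = rng.normal(scale=0.02, size=(D, L))   # stand-in for the CNN-LSTM encoder
W_dec = rng.normal(scale=0.02, size=(L, D))   # stand-in for the decoder
w_cls = rng.normal(scale=0.02, size=L)        # auxiliary classification head

def forward(x, y):
    """Multi-task pass: reconstruct x and classify attended (y=1) vs
    unattended (y=0) from the same shared latent vector."""
    z = np.tanh(x @ W_enc)                       # latent (joint feature) vector
    x_hat = z @ W_dec                            # reconstruction branch
    p = 1.0 / (1.0 + np.exp(-(z @ w_cls)))       # classification branch
    recon_loss = np.mean((x - x_hat) ** 2)       # autoencoder objective
    cls_loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))  # auxiliary objective
    return z, recon_loss + 0.5 * cls_loss        # assumed 0.5 task weight

epoch = rng.normal(size=D)          # synthetic stand-in for one ERP epoch
z, loss = forward(epoch, y=1)
rate = D / z.size                   # compression rate of the latent code
```

Training would minimize the combined loss over all datasets jointly, which is what makes the latent vector a reusable joint feature extractor.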
The dramatic rise of Deep Learning (DL) and its capability in biomedical applications lead us to explore its advantages for sleep apnea-hypopnea severity classification. To reduce the complexity of clinical diagnosis using Polysomnography (PSG), which is a multi-sensor platform, we incorporate our proposed DL scheme into a single airflow (AF) sensing signal (a subset of PSG). Seventeen features are extracted from the AF signal and fed into Deep Neural Networks for classification in two studies. First, we propose binary classifications using cutoff indices at AHI = 5, 15, and 30 events/hour. Second, we propose multiclass Sleep Apnea-Hypopnea Syndrome (SAHS) severity classification into four groups: no SAHS, mild SAHS, moderate SAHS, and severe SAHS. For evaluation, we use a larger number of patients than related works to accommodate more diversity: 520 AF records obtained from the MrOS sleep study (Visit 2) database. We then apply 10-fold cross-validation to obtain accuracy, sensitivity, and specificity. Moreover, we compare the results of our main classifier with two approaches used in previous research, the Support Vector Machine (SVM) and Adaboost-Classification and Regression Trees (AB-CART). In binary classification, our proposed method provides significantly higher performance than the other two approaches, with accuracies of 83.46%, 85.39%, and 92.69% at the respective cutoffs. For multiclass classification, it also attains the highest accuracy of all approaches, at 63.70%.

Index Terms: sleep apnea-hypopnea syndrome (SAHS) severity classification, deep neural networks, machine learning, single airflow sensing signal, feature extraction from airflow signals.
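The multiclass evaluation pipeline above (17 AF features, four severity classes, 10-fold cross-validation) can be sketched with scikit-learn. The feature values here are synthetic stand-ins with an artificial class-dependent shift so the toy problem is learnable; the network architecture and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(2)

# Synthetic stand-in: 520 records x 17 airflow-derived features, labeled with
# 4 SAHS severity classes (0 = no, 1 = mild, 2 = moderate, 3 = severe).
n, d = 520, 17
y = rng.integers(0, 4, size=n)
X = rng.normal(size=(n, d)) + y[:, None] * 0.8   # class-dependent shift

# Small feed-forward DNN evaluated with stratified 10-fold cross-validation,
# mirroring the protocol described in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
mean_acc = scores.mean()
```

Sensitivity and specificity per cutoff would be computed the same way by swapping the scorer and binarizing the labels at AHI = 5, 15, or 30.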
Technological advancement in wireless health monitoring allows lightweight wrist-worn wearable devices to be equipped with various sensors. Although the embedded photoplethysmography (PPG) sensors can measure changes in blood volume directly through contact with the skin, motion artifacts (MA) can occur during intense exercise. In this study, we performed heart rate (HR) estimation with a proposed post-calibration approach during three states representative of average daily activity (resting, sleeping, and intense treadmill activity) in 29 participants (130 minutes/person) on four popular wearable devices: Fitbit Charge HR, Apple Watch Series 4, TicWatch Pro, and Empatica E4. Compared to the HR provided by the Fitbit Charge HR (HR Fitbit), which had the highest error of 3.26 ± 0.34 bpm in the resting state, 2.33 ± 0.23 bpm in the sleeping state, 9.53 ± 1.47 bpm in the intense treadmill activity state, and 5.02 ± 0.64 bpm across all states combined, our improved HR estimation model with rolling windows as features reduced the mean absolute error (MAE) by 33.44% in resting, 15.88% in sleeping, and 9.55% in intense treadmill activity states, and by 18.73% across all states combined. Four machine learning (ML) algorithms (support vector regression (SVR), random forest (RF), Gaussian process (GP), and artificial neural network (ANN)) were formulated and trained with tuned hyperparameters. This demonstrates the feasibility of our proposed methods for post-calibrating HR monitoring with high accuracy, supporting awareness of individual fitness in daily applications.
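The post-calibration idea above (regress the true HR from the device reading plus rolling-window features) can be sketched with numpy. This is a minimal illustration under stated assumptions: the HR streams are synthetic with an artificial gain/offset bias standing in for device error, the window length and feature set (last reading, window mean, window std) are hypothetical choices, and ordinary least squares stands in for the SVR/RF/GP/ANN regressors the study actually compares.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: a slowly varying true HR and a device reading with a
# systematic gain/offset bias plus motion-artifact-like noise.
n = 1000
true_hr = 70 + 30 * np.sin(np.linspace(0, 8 * np.pi, n)) + rng.normal(0, 1, n)
device_hr = 1.1 * true_hr + 5 + rng.normal(0, 2, n)

def rolling_features(x, w=5):
    """For each trailing window: (last reading, window mean, window std)."""
    feats = np.empty((x.size - w + 1, 3))
    for i in range(feats.shape[0]):
        win = x[i:i + w]
        feats[i] = (win[-1], win.mean(), win.std())
    return feats

X = rolling_features(device_hr)
y = true_hr[4:]                               # align targets with window ends
X1 = np.column_stack([X, np.ones(len(X))])    # add intercept column

# Fit the calibration model on the first half, evaluate MAE on the second.
split = len(X1) // 2
beta, *_ = np.linalg.lstsq(X1[:split], y[:split], rcond=None)

raw_mae = np.mean(np.abs(device_hr[4:][split:] - y[split:]))
cal_mae = np.mean(np.abs(X1[split:] @ beta - y[split:]))
```

With a systematic bias of this kind, even the linear calibrator cuts the MAE substantially; the study's nonlinear regressors play the same role on real per-state wearable data.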