Identifying sleep stages from bio-signals requires time-consuming and tedious labor by skilled clinicians. Deep learning approaches have been introduced to tackle automatic sleep stage classification, but fully replacing clinicians with an automatic system remains difficult: individual bio-signals differ in many respects, making model performance inconsistent across incoming individuals. We therefore explore the feasibility of a novel approach capable of assisting clinicians and lessening their workload. We propose a transfer learning framework, MetaSleepLearner, based on Model-Agnostic Meta-Learning (MAML), which transfers sleep staging knowledge acquired from a large dataset to new individual subjects (source code is available at https://github.com/IoBT-VISTEC/MetaSleepLearner). The framework requires clinicians to label only a few sleep epochs, allowing the system to handle the remainder. Layer-wise Relevance Propagation (LRP) was also applied to examine what the model learned. On all acquired datasets, MetaSleepLearner achieved improvements of 5.4% to 17.7% over the conventional approach, with a statistically significant difference in the means of the two approaches. Model interpretations after adaptation to each subject further confirmed that the model learned reasonable features. MetaSleepLearner outperformed the conventional approaches when fine-tuned on recordings of both healthy subjects and patients. This is the first work to investigate MAML as a non-conventional pre-training method for this task, opening a path toward human-machine collaboration in sleep stage classification and easing clinicians' burden by requiring labels for only several epochs rather than an entire recording.
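To make the MAML-based transfer concrete, the sketch below illustrates the core inner/outer update loop on which such a framework rests. It is a minimal illustration, not the authors' released code: the toy network, task sampler, and hyperparameters (inner_lr, meta_lr, input/output dimensions) are hypothetical placeholders.

# Minimal second-order MAML sketch (hypothetical network and task sampler).
import torch
import torch.nn as nn
from torch.func import functional_call

# Toy sleep-stage classifier: one 3000-sample epoch in, 5 stage logits out.
model = nn.Sequential(nn.Linear(3000, 64), nn.ReLU(), nn.Linear(64, 5))
loss_fn = nn.CrossEntropyLoss()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # outer-loop optimizer
inner_lr = 0.01                                           # inner-loop step size

def sample_task():
    """Stand-in for drawing one subject's support/query sleep epochs."""
    xs, ys = torch.randn(10, 3000), torch.randint(0, 5, (10,))  # support set
    xq, yq = torch.randn(10, 3000), torch.randint(0, 5, (10,))  # query set
    return xs, ys, xq, yq

for step in range(100):                      # meta-training iterations
    meta_opt.zero_grad()
    for _ in range(4):                       # tasks (subjects) per meta-batch
        xs, ys, xq, yq = sample_task()
        params = dict(model.named_parameters())
        # Inner loop: one gradient step on the support set, kept differentiable.
        support_loss = loss_fn(functional_call(model, params, (xs,)), ys)
        grads = torch.autograd.grad(support_loss, params.values(), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        # Outer loop: evaluate the adapted parameters on the query set.
        query_loss = loss_fn(functional_call(model, adapted, (xq,)), yq)
        query_loss.backward()                # accumulates meta-gradients
    meta_opt.step()

At deployment time, the same inner-loop update would be run once on the few epochs a clinician labels for a new subject, leaving the meta-learned initialization otherwise untouched.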
The dramatic rise of Deep Learning (DL) and its capability in biomedical applications led us to explore the advantages of using DL for sleep apnea-hypopnea severity classification. To reduce the complexity of clinical diagnosis with Polysomnography (PSG), a multi-sensor platform, we apply our proposed DL scheme to a single airflow (AF) sensing signal (a subset of PSG). Seventeen features were extracted from the AF signal and fed into deep neural networks in two studies. First, we propose binary classifications using cutoff indices at AHI = 5, 15, and 30 events/hour. Second, we propose a multiclass Sleep Apnea-Hypopnea Syndrome (SAHS) severity classification that assigns patients to four groups: no SAHS, mild SAHS, moderate SAHS, and severe SAHS. For evaluation, we used a larger number of patients than related works to accommodate more diversity: 520 AF records obtained from the MrOS sleep study (Visit 2) database. We then applied 10-fold cross-validation to obtain accuracy, sensitivity, and specificity. Moreover, we compared the results of our main classifier with two approaches used in previous research: the Support Vector Machine (SVM) and Adaboost-Classification and Regression Trees (AB-CART). In the binary classification, our proposed method provides significantly higher performance than the other two approaches, with accuracies of 83.46%, 85.39%, and 92.69% at each cutoff, respectively. In the multiclass classification, it also achieves the highest accuracy of all approaches, at 63.70%.
Index Terms: sleep apnea-hypopnea syndrome (SAHS) severity classification, deep neural networks, machine learning, single airflow sensing signal, feature extraction from airflow signals.
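The evaluation protocol described above can be sketched as follows: 10-fold cross-validation of a small dense network on a 17-feature airflow matrix at one AHI cutoff. This is a hedged sketch, not the paper's exact configuration; the simulated feature values, network shape, and AHI labels are placeholders.

# 10-fold CV of a small DNN on 17 airflow features (all data simulated).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(520, 17))            # 520 AF records x 17 extracted features
ahi = rng.uniform(0, 60, size=520)        # simulated AHI (events/hour)
y = (ahi >= 15).astype(int)               # binary label at the AHI = 15 cutoff

clf = make_pipeline(
    StandardScaler(),                     # scale features before the network
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

The four-class severity variant would only change the label construction (binning AHI at 5, 15, and 30 events/hour) and the scoring metric.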
For several decades, electroencephalography (EEG) has been one of the most commonly used tools in emotional state recognition via monitoring of distinctive brain activities. An array of datasets has been generated using diverse emotion-eliciting stimuli, with the resulting brainwave responses conventionally captured by high-end EEG devices. However, the applicability of these devices is limited by practical constraints, and they may prove difficult to deploy in the highly mobile contexts omnipresent in everyday life. In this study, we evaluate the potential of OpenBCI to bridge this gap by first comparing its performance to a research-grade EEG system, employing the same algorithms that were applied to benchmark datasets. Moreover, for the purpose of emotion classification, we propose a novel method to facilitate the selection of audio-visual stimuli of high/low valence and arousal. Our setup entailed recruiting 200 healthy volunteers of varying ages to identify the top 60 affective video clips from a total of 120 candidates through standardized self-assessment, genre tags, and unsupervised machine learning. An additional 43 participants were enrolled to watch the pre-selected clips while emotional EEG brainwaves and peripheral physiological signals were collected. These recordings were analyzed, and the extracted features were fed into a classification model to predict whether the elicited signals were associated with a high or low level of valence and arousal. Our prediction accuracies proved comparable to those of previous studies that utilized more costly EEG amplifiers for data acquisition.
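A pipeline of the kind described, extracting features from EEG recordings and feeding them to a binary valence/arousal classifier, commonly uses spectral band power as the feature. The sketch below assumes that choice; the sampling rate, band definitions, trial counts, and logistic-regression classifier are illustrative assumptions, not the study's reported configuration.

# Band-power features via Welch's method, then a high/low-valence classifier.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 250                                   # Hz, e.g. an OpenBCI Cyton board
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch):
    """epoch: (n_channels, n_samples) -> flat vector of per-channel band powers."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2)   # PSD per channel
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))        # mean power in the band
    return np.concatenate(feats)

rng = np.random.default_rng(0)
epochs = rng.normal(size=(86, 8, FS * 10))  # 86 trials, 8 channels, 10 s each
labels = rng.integers(0, 2, size=86)        # simulated high/low valence labels

X = np.stack([band_powers(e) for e in epochs])
clf = LogisticRegression(max_iter=1000).fit(X[:60], labels[:60])
print("held-out accuracy:", clf.score(X[60:], labels[60:]))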