2020
DOI: 10.1109/tnnls.2019.2899781
Automatic Sleep Staging Employing Convolutional Neural Networks and Cortical Connectivity Images

Cited by 40 publications (23 citation statements)
References 51 publications
“…Usually, the numbers of samples for each sleep stage are unbalanced. To date, several methods have been proposed to alleviate this issue, including class-balanced random sampling [122], data augmentation [130], class-balanced training-set design [28], and the synthetic minority oversampling technique [131].…”
Section: E. Sleep Stage Classification
confidence: 99%
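The class-balanced random sampling mentioned above can be sketched in a few lines: oversample each stage with replacement until every class matches the size of the largest one. This is a minimal illustration, not the procedure from the cited works; the function name and toy labels are hypothetical.

```python
import numpy as np

def class_balanced_indices(labels, seed=None):
    """Return shuffled indices that oversample every class (sleep stage)
    with replacement up to the size of the largest class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    parts = []
    for c in classes:
        members = np.flatnonzero(labels == c)
        # sample with replacement so minority stages reach the target count
        parts.append(rng.choice(members, size=target, replace=True))
    return rng.permutation(np.concatenate(parts))

# toy epoch labels: 5 x stage 0, 2 x stage 1, 3 x stage 2
labels = [0] * 5 + [1] * 2 + [2] * 3
idx = class_balanced_indices(labels, seed=0)
balanced = np.asarray(labels)[idx]  # each stage now appears 5 times
```

Sampling with replacement is the simplest variant; alternatives in the excerpt (SMOTE, augmentation) instead synthesize new minority-class examples rather than repeating existing ones.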
“…To compare the performance to that of conventional machine learning models, we trained random forest (RF) [38], gradient boosting (GB) [39], and support vector machine (SVM) [40] models on data comprising Time and the combination of Time and IF. Given that conventional machine learning models are effective when the inputs are extracted features, we decomposed the EEG into five non-overlapping frequency ranges, namely delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-50 Hz), following a conventional approach [41]. We extracted the power of each band in each of the 19 channels, namely the band power (BP), and obtained a final input vector with a dimension of 90 by concatenating these values.…”
Section: Performance Comparison To Different Models
confidence: 99%
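The band-power feature extraction described in the excerpt can be sketched as follows: estimate each channel's power spectral density and integrate it over the five conventional bands. This is an illustrative sketch, not the cited pipeline; note that 19 channels x 5 bands gives 95 values here, whereas the excerpt reports a 90-dimensional vector, so the exact channel or band bookkeeping in the original work evidently differs.

```python
import numpy as np
from scipy.signal import welch

# Conventional EEG bands (Hz), matching the excerpt (beta: 13-30 Hz).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def band_power_features(eeg, fs):
    """eeg: (n_channels, n_samples). Returns per-channel band powers,
    concatenated channel by channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # PSD per channel
    df = freqs[1] - freqs[0]
    feats = []
    for ch_psd in psd:
        for lo, hi in BANDS.values():
            mask = (freqs >= lo) & (freqs < hi)
            # approximate band power by summing PSD bins over the band
            feats.append(ch_psd[mask].sum() * df)
    return np.asarray(feats)

fs = 200
rng = np.random.default_rng(0)
epoch = rng.standard_normal((19, 30 * fs))   # one 30-s epoch, 19 channels
features = band_power_features(epoch, fs)    # 19 x 5 = 95 values
```

Welch's method is one standard PSD estimator; the cited work does not specify its estimator in this excerpt, so treat the choice as an assumption.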
“…We proposed a slight modification of the fully end-to-end classifier based on the addition of frequency domain information to improve our performance. Instead of complex hand-engineered features, including time-frequency distributions [18], [27], entropy [23], and functional connectivity [20], we selected the IF which was simply obtained from time series, to maintain the nature of the end-to-end classifier as much as possible.…”
Section: A. End-to-End Classifier
confidence: 99%
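One common way to obtain instantaneous frequency (IF) directly from a time series, as the excerpt describes, is the phase derivative of the analytic signal from a Hilbert transform. This is a sketch of that standard estimator, not necessarily the exact method used in the cited work.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) of a real signal, computed as the
    time derivative of the unwrapped analytic-signal phase."""
    analytic = hilbert(x)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 500
t = np.arange(0, 2, 1 / fs)
tone = np.sin(2 * np.pi * 10 * t)          # pure 10 Hz tone
inst_f = instantaneous_frequency(tone, fs)  # ~10 Hz away from the edges
```

For a pure tone the estimate is flat except for edge effects near the start and end of the window, which is why the middle of the signal is the meaningful region to inspect.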
“…Recently, deep neural networks (DNNs) have been developed and have remarkably improved classification performance in various areas of study [22]. DNN models based on convolutional neural networks (CNNs) have been applied to the classification problem of sleep scoring from EEG data [23,24] and electrocardiogram (ECG) data [25]. CNNs have, moreover, been employed for activity recognition from wrist-worn accelerometer data [26], where they outperformed conventional machine learning algorithms such as SVM and LDA.…”
Section: Introduction
confidence: 99%