Year: 2018
DOI: 10.1016/j.bspc.2017.12.001

A convolutional neural network for sleep stage scoring from raw single-channel EEG

Citations: cited by 357 publications (294 citation statements)
References: 20 publications
“…[figure axis labels: number of examples; ratio (examples/min)] datasets with a much greater number of subjects: [132,160,188,149] all used datasets with at least 250 subjects, while [22] and [49] used datasets with 10,000 and 16,000 subjects, respectively. As explained in Section 3.7.4, the untapped potential of DL-EEG might reside in combining data coming from many different subjects and/or datasets to train a model that captures common underlying features and generalizes better.…”
Section: Subjects (mentioning)
confidence: 99%
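The pooling idea in the excerpt above can be illustrated with a short sketch. This is a minimal, hypothetical example (the NumPy-based layout, array shapes, and subject IDs are assumptions, not anything prescribed by the cited works): epochs from several subjects or datasets are concatenated into one training set while subject labels are kept, so that later splits can remain subject-independent.

```python
# Minimal sketch (hypothetical shapes): pool 30-s EEG epochs from several
# subjects/datasets into one training set, keeping subject IDs for later
# subject-independent splitting.
import numpy as np

def pool_subjects(per_subject_epochs):
    """per_subject_epochs: dict mapping subject_id -> (n_epochs, n_samples) array."""
    X, groups = [], []
    for subject_id, epochs in per_subject_epochs.items():
        X.append(epochs)
        groups.extend([subject_id] * len(epochs))
    return np.concatenate(X, axis=0), np.asarray(groups)

# Random data standing in for two recordings (3000 samples = 30 s at 100 Hz).
rng = np.random.default_rng(0)
data = {"S01": rng.standard_normal((10, 3000)), "S02": rng.standard_normal((8, 3000))}
X, groups = pool_subjects(data)
print(X.shape, groups.shape)  # (18, 3000) (18,)
```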
“…In [183], EEG segments from the interictal class were split into smaller subgroups of equal size to the preictal class. In [160], cost-sensitive learning and oversampling were used to solve the class imbalance problem for sleep staging but the overall performance using these approaches did not improve. In [144], the authors randomly replicated subjects from the minority class to balance classes.…”
Section: Data Augmentation (mentioning)
confidence: 99%
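The two imbalance-handling strategies named in the excerpt above, oversampling and cost-sensitive learning, can be sketched as follows. This is an illustrative example in PyTorch; the library choice, the toy label distribution, and the epoch shape (1 channel x 3000 samples, i.e. 30 s at 100 Hz) are assumptions, not the setup of the cited works.

```python
# Sketch of (a) oversampling via a weighted sampler and (b) cost-sensitive
# learning via per-class loss weights, on toy imbalanced sleep-stage labels.
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

labels = np.array([0] * 500 + [1] * 50 + [2] * 450)   # imbalanced toy stage labels
X = torch.randn(len(labels), 1, 3000)                 # dummy single-channel 30-s epochs
y = torch.from_numpy(labels).long()

# (a) Oversampling: draw each epoch with probability inversely proportional to
# its class frequency, so rare stages appear about as often as common ones.
class_counts = np.bincount(labels)
sample_weights = 1.0 / class_counts[labels]
sampler = WeightedRandomSampler(torch.as_tensor(sample_weights, dtype=torch.double),
                                num_samples=len(labels), replacement=True)
loader = DataLoader(TensorDataset(X, y), batch_size=64, sampler=sampler)

# (b) Cost-sensitive learning: weight the loss so mistakes on rare stages cost more.
class_weights = torch.as_tensor(len(labels) / (len(class_counts) * class_counts),
                                dtype=torch.float)
criterion = torch.nn.CrossEntropyLoss(weight=class_weights)
```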
“…The Sleep-EDF database, though widely employed in quantitative experiments, lacks a standard test bench for fair comparison between different ASSC frameworks. Evaluation metrics are susceptible to changes in class distributions, the number of epochs used in the evaluation task, and the epoch selection scheme [20]. We can see an 8.8% reduction in the accuracy of [11] in the 6-stage SC-task compared to the paper-reported metric, which can be due to an increased number of data epochs and patient-independent splitting during evaluation.…”
Section: Number of Epochs and Split (mentioning)
confidence: 89%
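The point about patient-independent splitting can be made concrete with a small sketch using scikit-learn's GroupKFold (the library, the random toy data, and the subject-ID layout are assumptions; the cited works do not necessarily use this tooling). Grouping by subject guarantees that no recording contributes epochs to both the training and the test fold.

```python
# Patient-independent evaluation: all epochs of a given subject fall entirely
# in either the training or the test fold of each split.
import numpy as np
from sklearn.model_selection import GroupKFold

n_epochs = 1000
X = np.random.randn(n_epochs, 3000)                  # dummy 30-s single-channel epochs
y = np.random.randint(0, 5, size=n_epochs)           # 5 sleep stages
subjects = np.random.randint(0, 20, size=n_epochs)   # subject ID per epoch

for fold, (train_idx, test_idx) in enumerate(
        GroupKFold(n_splits=5).split(X, y, groups=subjects)):
    # No subject appears on both sides of the split.
    assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
    # ...train on X[train_idx], y[train_idx]; evaluate on X[test_idx], y[test_idx]
```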
“…A number of sleep health studies have been compiled into datasets that are used in quantitative experiments, i.e. the Sleep Heart Health Study (SHHS) [20], Montreal Archive of Sleep Studies (MASS) [10] and the Physionet Sleep-EDF Database [21]. We use the Physionet Sleep-EDF Expanded Database in our experiments for its data volume and commonality in existing literature [9].…”
Section: Dataset (mentioning)
confidence: 99%
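For reference, the Physionet Sleep-EDF Expanded data mentioned above can be fetched programmatically; the sketch below uses MNE-Python's sleep_physionet fetcher. Treat it as a hedged example: it assumes a recent MNE version with this fetcher available, and it only shows loading one PSG/hypnogram pair, not any of the cited processing pipelines.

```python
# Fetch one Sleep-EDF Expanded recording (PSG + hypnogram) with MNE-Python.
import mne
from mne.datasets.sleep_physionet.age import fetch_data

# Download the polysomnography file and its annotation file for subject 0, night 1.
[files] = fetch_data(subjects=[0], recording=[1])

raw = mne.io.read_raw_edf(files[0], preload=False)    # PSG signals (EDF)
raw.set_annotations(mne.read_annotations(files[1]))   # sleep-stage annotations (EDF+)
print(raw.info["sfreq"], raw.ch_names)                # sampling rate, channels (incl. 'EEG Fpz-Cz')
```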
“…Also, it would be interesting to add memory to the model using recurrent networks, as the classification of some inputs, following the clinical definition, depends as well on the status of the neighbouring epochs.
Biswal et al. [22] (Massachusetts General Hospital, 1000 recordings): 0.77, 0.81, 0.70, 0.77, 0.83, 0.92
Längkvist et al. [18] (St Vincent's University Hospital, 25 recordings): 0.63, 0.73, 0.44, 0.65, 0.86, 0.80
Sors et al. [23] (SHHS, 1730 recordings): 0.81, 0.91, 0.43, 0.88, 0.85, 0.85
Supratak et al. [21] (MASS dataset, 62 recordings): 0.80, 0.87, 0.60, 0.90, 0.82, 0.89
Supratak et al. [21] (SleepEDF, 20 recordings): 0.76, 0.85, 0.47, 0.86, 0.85, 0.82
Tsinalis et al. [19] (SleepEDF, 39 recordings): 0.71, 0.72, 0.47, 0.85, 0.84, 0.81
Tsinalis et al. [20] (SleepEDF, 39 recordings): 0.66, 0.67, 0.44, 0.81, 0.85, 0.76…”
Section: Discussion (mentioning)
confidence: 99%
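The suggestion above about adding memory with recurrent networks can be sketched as a small CNN encoder followed by an LSTM over a sequence of neighbouring epochs. This is an illustrative PyTorch sketch under assumed shapes (30-s single-channel epochs of 3000 samples, 5 stages); it is not the architecture of the paper or of any of the cited works.

```python
# CNN per-epoch encoder + LSTM across neighbouring epochs, so each epoch's
# prediction can depend on its temporal context.
import torch
import torch.nn as nn

class CnnLstmStager(nn.Module):
    def __init__(self, n_classes=5, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(                         # per-epoch feature extractor
            nn.Conv1d(1, 8, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(8, 16, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True)     # memory across epochs
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                                     # x: (batch, seq_len, 1, n_samples)
        b, s, c, t = x.shape
        feats = self.encoder(x.reshape(b * s, c, t)).reshape(b, s, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                                 # one score vector per epoch

scores = CnnLstmStager()(torch.randn(2, 10, 1, 3000))
print(scores.shape)  # (2, 10, 5): 2 sequences, 10 epochs each, 5 stage scores
```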