2022
DOI: 10.1155/2022/5214195

[Retracted] Classification of Bioinformatics EEG Data Signals to Identify Depressed Brain State Using CNN Model

Abstract: Patients suffering from severe depression may be precisely assessed using online EEG categorization and their progress tracked over time, minimizing the risk of danger and suicide. Online EEG categorization systems, on the other hand, suffer additional challenges in the absence of empirical oversight. A lack of effective decoupling between brain regions and neural networks occurs during brain disease attacks, resulting in EEG data with poor signal intensity, high noise, and nonstationary characteristics. CNN e…
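The abstract is truncated, but it indicates that a CNN is used to classify EEG signals into depressed versus non-depressed brain states. As a purely illustrative sketch of that general approach (this is not the authors' architecture; the channel count, epoch length, and layer sizes are assumptions), a compact 1D CNN over multichannel EEG epochs might look like the following:

```python
# Minimal sketch (not the paper's model): a 1D CNN that maps multichannel EEG
# epochs to a binary depressed / non-depressed decision.
# Shapes are illustrative assumptions: 19 channels, 1-second epochs at 256 Hz.
import torch
import torch.nn as nn

N_CHANNELS = 19   # assumed EEG montage size
N_SAMPLES = 256   # assumed epoch length (1 s at 256 Hz)

model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),  # temporal filters
    nn.ReLU(),
    nn.MaxPool1d(4),                                       # downsample in time
    nn.Conv1d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                               # collapse the time axis
    nn.Flatten(),
    nn.Linear(64, 2),                                       # class logits
)

epochs = torch.randn(8, N_CHANNELS, N_SAMPLES)  # dummy batch of EEG epochs
logits = model(epochs)                          # shape: (8, 2)
```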

Citations: cited by 11 publications (9 citation statements)
References: 19 publications (24 reference statements)
“…Relative to previous studies performing automated diagnosis of MDD using raw EEG data with robust cross-validation techniques, our model obtained higher performance [78], and relative to studies using extracted features with traditional machine learning approaches and robust cross-validation techniques, our model obtained comparable or higher performance [27]. There were some studies that obtained higher test performance than our model using either raw EEG data [80], [81] or extracted features [3], [21], [24], [26]. However, it appears based on the descriptions of their cross-validation approaches that those studies allowed data from the same study participants to leak across training, validation, and test sets within the same folds.…”
Section: Discussion (mentioning)
confidence: 72%
“…Additionally, relative to [27], a study that used a robust cross-validation approach and manually engineered features, we also demonstrated an improvement. A number of studies using raw EEG [28], [29] and manually engineered features [30]–[33] did obtain higher classification performance than our approach. However, their cross-validation approaches are either not clearly described, or it seems that they employed cross-validation approaches that allowed data leakage between the training and test sets.…”
Section: Results (mentioning)
confidence: 99%
“…However, there was not a significant difference between M2 and M3 performance, which suggested that M3 was able to learn linearly separable features without dropping performance below that of M2. Many studies trained models that outperformed all three of our architectures for MDD classification [25,31,57,64–66]. However, most of those studies used more traditional cross-validation approaches that allowed samples from the same recording to be distributed across training, validation, and test sets.…”
Section: M1–M3: Model Performance Analysis (mentioning)
confidence: 99%
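The recurring point in these citation statements is that evaluation is only trustworthy when data from the same participant (or the same recording) cannot appear in both the training and test folds. A minimal sketch of such subject-wise cross-validation, using scikit-learn's GroupKFold with assumed data shapes and an assumed logistic-regression classifier (none of this comes from the cited papers), is shown below:

```python
# Minimal sketch of subject-wise cross-validation, the safeguard the citing
# papers describe: epochs from one participant never appear in both the
# training and the test fold. Data shapes and classifier are assumptions.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, epochs_per_subject, n_features = 30, 20, 64

X = rng.normal(size=(n_subjects * epochs_per_subject, n_features))   # per-epoch features
y = np.repeat(rng.integers(0, 2, n_subjects), epochs_per_subject)    # one label per subject
groups = np.repeat(np.arange(n_subjects), epochs_per_subject)        # subject ID per epoch

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))  # accuracy on unseen subjects only

print(f"mean subject-wise accuracy: {np.mean(scores):.3f}")
```

Because the split is grouped by subject ID rather than by epoch, every test fold contains only participants the model has never seen, which is the condition the citing authors argue several compared studies failed to meet.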