2021
DOI: 10.3390/s21093225
The Effects of Individual Differences, Non-Stationarity, and the Importance of Data Partitioning Decisions for Training and Testing of EEG Cross-Participant Models

Abstract: EEG-based deep learning models have trended toward models that are designed to perform classification on any individual (cross-participant models). However, because EEG varies across participants due to non-stationarity and individual differences, certain guidelines must be followed for partitioning data into training, validation, and testing sets, in order for cross-participant models to avoid overestimation of model accuracy. Despite this necessity, the majority of EEG-based cross-participant models have not…

Cited by 23 publications (15 citation statements)
References 42 publications
“…Because of these properties, the only proper way for model validation is the validation on the signals from an unseen subject. Empirical tests show that there is a large difference in the accuracies between validation on the unseen subjects and validation on the unseen parts of the signal [212]. Reporting of validation with improper methodology can create overexpectation of the model performance, bad generalization on the unseen subjects, and can lead other researchers in the wrong direction.…”
Section: Discussion
confidence: 99%
“…The only proper way for model validation is subject-level validation, as it represents the real-life setting in which the data from a new subject are used only for testing the model. Empirical tests conducted in related research showed a large difference in the accuracies between epoch-level validation and subject-level validation [60].…”
Section: Discussion
confidence: 99%
“…To be effective in detection across participants, a model must be highly generalizable and resistant to the effects of non-stationarity and individual differences. For training and testing of a cross-participant model, this requires that data from participants used for model training must not be used for model validation or testing [20]. This is due to the individual differences and non-stationarity that are inherent within EEG data.…”
Section: Methods
confidence: 99%
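The partitioning rule quoted above — no participant's data may appear in more than one of the train/validation/test sets — can be sketched as a participant-level split. The helper below is a hypothetical illustration (the function name and the `(participant_id, epoch)` data layout are assumptions for the sketch, not taken from the paper):

```python
import random

def participant_level_split(epochs, train_frac=0.6, val_frac=0.2, seed=0):
    """Partition EEG epochs so that each participant's data lands in
    exactly one of the train/validation/test sets.

    `epochs` is a list of (participant_id, epoch_data) pairs; the split
    fractions apply to participants, not to individual epochs.
    """
    participants = sorted({pid for pid, _ in epochs})
    rng = random.Random(seed)
    rng.shuffle(participants)

    n_train = int(len(participants) * train_frac)
    n_val = int(len(participants) * val_frac)
    train_ids = set(participants[:n_train])
    val_ids = set(participants[n_train:n_train + n_val])
    # Remaining participants form the held-out test set.

    split = {"train": [], "val": [], "test": []}
    for pid, x in epochs:
        if pid in train_ids:
            split["train"].append((pid, x))
        elif pid in val_ids:
            split["val"].append((pid, x))
        else:
            split["test"].append((pid, x))
    return split

# Example: 5 participants with 4 epochs each.
epochs = [(p, f"epoch{e}") for p in range(5) for e in range(4)]
split = participant_level_split(epochs)
```

Splitting at the epoch level instead (shuffling all epochs together) would leak each participant's signal characteristics into the test set, producing the inflated accuracies the quoted discussion warns about.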
“…These datasets were collected by the 711th Human Performance Wing (HPW) in partnership with the University of Dayton through two different experiments for the purpose of studying event related potentials (ERPs) during a vigilance decrement across various vigilance tasks [18,19]. Models are trained on data from two of the vigilance tasks and only a subset of the participants and then tested using data from a separate vigilance task that the model has not seen, as well as participants that the model has not seen, which is crucial in order to avoid overestimated test accuracies in cross-participant EEG models [20].…”
Section: Introduction
confidence: 99%