2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
DOI: 10.1109/smc.2018.00186

Cross-Paradigm Pretraining of Convolutional Networks Improves Intracranial EEG Decoding

Abstract: When it comes to the classification of brain signals in real-life applications, the training and the prediction data are often described by different distributions. Furthermore, diverse data sets, e.g., recorded from various subjects or tasks, can even exhibit distinct feature spaces. The fact that data that have to be classified are often only available in small amounts reinforces the need for techniques to generalize learned information, as performances of brain-computer interfaces (BCIs) are enhanced by inc…

Cited by 12 publications (6 citation statements)
References 35 publications (39 reference statements)
“…Nevertheless, relatively few pretraining approaches for raw EEG have been developed. Those that exist often involve event-related potentials [11] or task data [12] that are very different from resting-state data, or rely on self-supervised learning [13]. One study of particular interest used a pretrained architecture originally developed in the imaging domain to classify raw EEG data [14].…”
Section: Introduction
confidence: 99%
“…Unlike data augmentation, however, pre-training does not create or obtain more examples from the joint distribution P(X, Y). It rests on the assumption that an ML model trained on a large, diverse dataset close to the targeted brainwave task will capture extra information from these various distributions, can benefit both causal and anti-causal models, and achieve stronger BCI generalization, as demonstrated in [63] for enhanced intracranial EEG decoding and in [64] for improved MI EEG classification (a voluntarily engaged, endogenous task).…”
Section: Training EEG Data, 4.2.1 Data Scarcity
confidence: 99%
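The pretrain-then-fine-tune idea quoted above can be illustrated with a deliberately tiny sketch: a logistic-regression "decoder" is first fitted on a large source task and its weights are then used as a warm start on a small target task. All data here are synthetic and the model is a stand-in for the CNNs discussed in the paper, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, lr=0.1, steps=200):
    """Batch gradient descent on logistic loss; w is an optional warm start."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)       # gradient of mean log-loss
    return w

# Source "paradigm": many labelled trials sharing a discriminative direction.
true_w = np.array([2.0, -1.0, 0.5])
X_src = rng.normal(size=(500, 3))
y_src = (X_src @ true_w + 0.1 * rng.normal(size=500) > 0).astype(float)

# Target "paradigm": only a handful of trials from a related distribution.
X_tgt = rng.normal(size=(20, 3))
y_tgt = (X_tgt @ true_w > 0).astype(float)

w_pre = train_logreg(X_src, y_src)                         # pretrain on source
w_ft = train_logreg(X_tgt, y_tgt, w=w_pre.copy(), steps=20)  # brief fine-tune
w_scratch = train_logreg(X_tgt, y_tgt, steps=20)           # same budget, no pretraining

def acc(w, X, y):
    return float(((X @ w > 0).astype(float) == y).mean())
```

The fine-tuned model starts from a representation already shaped by the abundant source data, which is the mechanism by which cross-paradigm pretraining is argued to help when target data are scarce.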
“…We selected a specific CNN design (Braindecode Deep4 network) [10], as the Deep4 network was already successfully applied to a wide range of EEG and iEEG classification problems [8,10,11,43,44] and also allowed first insights into the internal representations of learned EEG features [33]. In summary, the Deep4 network consists of an input layer, four hidden layers and an output layer (see supplementary figure S2).…”
Section: Decoding Algorithm: Deep CNNs
confidence: 99%
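The quoted description of the Deep4 network (an input layer, four hidden conv-pool blocks, an output layer) can be made concrete with a toy walk-through of how the temporal dimension shrinks through such a stack. The kernel and pooling sizes below are illustrative placeholders, not the published Deep4 hyperparameters.

```python
# Toy walk-through of the temporal dimension through four conv-pool blocks,
# as in a Deep4-style EEG decoder. Sizes are illustrative only.

def conv_out(n, kernel, stride=1):
    # output length of a valid (no-padding) 1-D convolution
    return (n - kernel) // stride + 1

def pool_out(n, kernel, stride):
    # output length of a 1-D max-pooling layer
    return (n - kernel) // stride + 1

t = 1000                                  # input: 1000 time samples per trial
for block in range(4):                    # four hidden conv-pool blocks
    t = conv_out(t, kernel=10)            # temporal convolution
    t = pool_out(t, kernel=3, stride=3)   # max pooling
    print(f"block {block + 1}: {t} time steps remain")
# a final dense/output layer maps the remaining features to class scores
```

Each block trades temporal resolution for increasingly abstract features; by the fourth block only a few time steps remain, which is what makes a compact output layer feasible.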