2020
DOI: 10.1109/access.2020.2971600

Learning Invariant Representations From EEG via Adversarial Inference

Abstract: Discovering and exploiting shared, invariant neural activity in electroencephalogram (EEG) based classification tasks is of significant interest for generalizability of decoding models across subjects or EEG recording sessions. While deep neural networks are recently emerging as generic EEG feature extractors, this transfer learning aspect usually relies on the prior assumption that deep networks naturally behave as subject- (or session-) invariant EEG feature extractors. We propose a further step towards invar…

Cited by 77 publications (64 citation statements)
References 46 publications
“…Interpretability is an important aspect of machine learning for clinical applications. A number of previous works on EEG analysis, such as [5] [14] [38] [39], incorporate this in their model development and provide insights towards interpretability in the model, revealing the relevance of each input unit (either channels of the EEG device or pixels of the spatial-temporal EEG data matrix) to the classification decision. A similar interpretation analysis is performed in our work by adopting the occlusion-based approach [37], which reveals the different levels of reliance on the individual channels before and after the transfer process.…”
Section: Discussion
confidence: 99%
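The occlusion-based channel analysis referred to in this statement can be illustrated with a short sketch: each channel is zeroed out in turn and the resulting drop in decoding accuracy is taken as that channel's relevance. This is a minimal illustration of the general idea, not the cited implementation [37]; the `model.predict` interface and the `X`/`y` array shapes are assumptions made for the example.

```python
import numpy as np

def channel_occlusion_relevance(model, X, y):
    """Score each EEG channel by the accuracy drop when it is occluded.

    Assumed interface: model.predict maps an (n_trials, n_channels, n_samples)
    array to predicted class labels; X holds EEG trials and y the true labels.
    """
    baseline = np.mean(model.predict(X) == y)       # accuracy on intact data
    relevance = np.zeros(X.shape[1])
    for ch in range(X.shape[1]):
        X_occluded = X.copy()
        X_occluded[:, ch, :] = 0.0                  # occlude one channel
        acc = np.mean(model.predict(X_occluded) == y)
        relevance[ch] = baseline - acc              # larger drop = stronger reliance
    return relevance
```

Comparing the relevance profile before and after transfer, as the quoted statement does, then shows how the decoder's reliance on individual channels shifts.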
“…The question here is why TL can be considered an effort to reduce costly, time-consuming calibration. Most studies assume that a subject-invariant feature space can be applied directly, with zero or only short calibration, to new subjects' EEG data (Jeon et al., 2020; Özdenizci et al., 2020). Contrary to explicit TL-based methods, implicit TL-based approaches follow the hypothesis that their method can train domain-invariant feature spaces on the basis of only their internal architectures, without explicitly minimizing the discrepancy.…”
Section: Approaches In Transfer Learning
confidence: 99%
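The explicit route mentioned in this statement, as in the adversarial approach of Özdenizci et al. (2020), typically attaches an adversary that tries to recover the subject identity from the learned features while the encoder is trained to defeat it. Below is a minimal PyTorch sketch of such adversarial invariance training using a gradient-reversal layer; the layer sizes, channel/class/subject counts, and the weighting `lamb` are illustrative assumptions, not the architecture of any cited paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class InvariantEEGNet(nn.Module):
    def __init__(self, n_channels=22, n_samples=256, n_classes=4, n_subjects=9, lamb=0.1):
        super().__init__()
        self.lamb = lamb
        # Simple temporal + spatial convolution encoder (illustrative sizes only).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.Conv2d(16, 16, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(16), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)), nn.Flatten())
        self.classifier = nn.Linear(16 * 16, n_classes)   # task head (e.g., motor imagery)
        self.adversary = nn.Linear(16 * 16, n_subjects)   # tries to identify the subject

    def forward(self, x):                 # x: (batch, 1, n_channels, n_samples)
        z = self.encoder(x)
        y_task = self.classifier(z)
        y_subj = self.adversary(GradReverse.apply(z, self.lamb))
        return y_task, y_subj
```

Training then minimizes the task cross-entropy plus the subject cross-entropy computed on the gradient-reversed features, so the encoder is pushed toward representations the subject adversary cannot exploit, i.e., toward a subject-invariant feature space that can be applied to new subjects with little or no calibration.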
“…In recent years, efforts have been made to take advantage of other real EEG samples (i.e., from another session or subject) to train deep neural networks that decode EEG samples, thereby mitigating the data insufficiency problem (Chai et al., 2016; Andreotti et al., 2018; Fahimi et al., 2019; Özdenizci et al., 2020). These studies, known as TL, have focused on transferring knowledge from one dataset to another.…”
Section: Advances In Transfer Learning
confidence: 99%
“…This section reviews the related works on transfer learning techniques for EEG-based BCI. [Figure/table fragment, "Transfer Learning for BCI": Targeted Subject [10,19,20,21,22]; Calibration Data [12,23,24,25,26,27], [28,11,29,30,31,32], [5,9], Proposed.] Although some research aimed at intra-subject, cross-session transfer [34], this study will mainly focus on the review of inter-subject transfer research. Fig.…”
Section: Related Work
confidence: 99%
“…Tu and Sun proposed a method that can extract both robust CSFs for all the subjects and adaptive CSFs for a single subject [10]. Recently, Özdenizci et al. aimed at discovering subject-independent features across subjects by using convolutional neural networks and adversarial training [21]. Zhang et al. and Jeon et al. developed methods for learning invariant data representations with deep learning approaches [29,31].…”
Section: Related Work
confidence: 99%
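Assuming "CSFs" in the quoted statement refers to common spatial filters, the standard single-subject, two-class CSP computation that robust/adaptive variants such as Tu and Sun's [10] build on can be sketched as follows; the function name and trial-array shapes are illustrative, and this is the plain CSP baseline rather than the cited method.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """Common spatial pattern filters for two-class EEG.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns an (n_filters, n_channels) matrix of spatial filters that
    maximize variance for one class while minimizing it for the other.
    """
    def mean_cov(trials):
        covs = [np.cov(t) for t in trials]      # channel covariance per trial
        return np.mean(covs, axis=0)

    cov_a, cov_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: cov_a w = lambda (cov_a + cov_b) w
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)                 # ascending eigenvalues
    # Take filters from both ends of the spectrum (most discriminative).
    picks = np.concatenate([order[:n_filters // 2],
                            order[-(n_filters - n_filters // 2):]])
    return eigvecs[:, picks].T
```

Log-variance of the trials projected through these filters is the usual feature vector fed to a classifier; subject-robust or adaptive variants modify how the class covariances are estimated or combined across subjects.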