2021
DOI: 10.3390/s21051792

Learning Subject-Generalized Topographical EEG Embeddings Using Deep Variational Autoencoders and Domain-Adversarial Regularization

Abstract: Two of the biggest challenges in building models for detecting emotions from electroencephalography (EEG) devices are the relatively small amount of labeled samples and the strong variability of signal feature distributions between different subjects. In this study, we propose a context-generalized model that tackles the data constraints and subject variability simultaneously using a deep neural network architecture optimized for normally distributed subject-independent feature embeddings. Variational autoenco…
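The abstract describes a variational encoder trained jointly with a domain-adversarial objective so that the latent embeddings are approximately normally distributed and carry as little subject-specific information as possible. As a rough illustration of that general idea only (not the paper's actual architecture: the layer sizes, feature dimension, and gradient-reversal formulation below are assumptions), a minimal PyTorch-style sketch:

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips (and scales) gradients in the backward pass.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SubjectInvariantVAE(nn.Module):
    def __init__(self, in_dim=310, latent_dim=32, n_subjects=15):  # hypothetical sizes
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
        # Adversarial head: tries to identify the subject from the latent code.
        self.subject_head = nn.Linear(latent_dim, n_subjects)

    def forward(self, x, lambd=1.0):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        x_hat = self.decoder(z)
        subj_logits = self.subject_head(GradReverse.apply(z, lambd))
        return x_hat, mu, logvar, subj_logits

def loss_fn(x, x_hat, mu, logvar, subj_logits, subj_labels, beta=1.0):
    recon = nn.functional.mse_loss(x_hat, x)                       # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # pull q(z|x) toward N(0, I)
    adv = nn.functional.cross_entropy(subj_logits, subj_labels)    # subject classification
    # Because of the gradient reversal, minimizing this sum trains the head to
    # recognize subjects while pushing the encoder to erase subject identity from z.
    return recon + beta * kl + adv

During training, subj_labels would hold the subject index of each EEG sample, so the encoder is explicitly penalized whenever the embedding still reveals who the signal came from.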

Cited by 11 publications (5 citation statements)
References 51 publications

Citation statements, ordered by relevance:
“…Numerous end-to-end ConvNet architectures have been proposed to learn informative features and handle noise variability in EEG analysis automatically and efficiently. For instance, attention has been classified using a Shallow ConvNet and a long short-term memory (LSTM) network on a three-back task [30], and emotion has been classified using a subject-invariant bilateral variational domain-adversarial neural network [31]. However, EEG features still need to be extracted into various representations that improve learning, especially for EEG analysis.…”
Section: Discussion (mentioning)
confidence: 99%
“…For these reasons, newer studies have tried to overcome the Dataset Shift problem in EEG-based BCIs [32]. In particular, Domain Adaptation (DA) strategies aim to construct models that generalize to unseen data by exploiting knowledge from available unlabelled data.…”
Section: A. The Dataset Shift Problem (mentioning)
confidence: 99%
“…A VAE often employs the Kullback-Leibler (KL) divergence, a measure of how the probability distribution of the latent space differs from the prior distribution from which latent samples are drawn [20]. A special version of the VAE was proposed in [21], focused on learning a generalised model of emotion by concurrently optimizing the goal of learning normally distributed and subject-independent feature representations, using spectral topography data. The ultimate objective was to maximize dataset inter-compatibility, improve robustness to localized electrode noise, and provide a more generally applicable method within neuroscience.…”
Section: B. Variational Autoencoders for Feature Representation (mentioning)
confidence: 99%
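For reference, the closed-form KL term referred to above, for the standard VAE setting of a diagonal-Gaussian approximate posterior measured against a standard normal prior (general VAE background, not a result specific to [21]):

\[
D_{\mathrm{KL}}\big(\mathcal{N}(\mu,\operatorname{diag}(\sigma^{2}))\,\big\|\,\mathcal{N}(0,I)\big)
  = \tfrac{1}{2}\sum_{j=1}^{d}\left(\mu_j^{2}+\sigma_j^{2}-\log\sigma_j^{2}-1\right),
\]

which the encoder minimizes alongside the reconstruction loss so that latent samples stay close to a normal distribution.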