2022
DOI: 10.1016/j.patcog.2021.108202
Graph variational auto-encoder for deriving EEG-based graph embedding

Cited by 24 publications (10 citation statements)
References 24 publications
“…To classify mental arithmetic vs. rest, a stacked long short-term memory (LSTM) network was applied to raw 1D EEG signals, resulting in an accuracy of 93.59% in the study by Ganguly et al (2020). Behrouzi and Hatzinakos (2022) elaborated the benefits of a deep learning-based graph variational auto-encoder on 2D representations to detect the task of mental arithmetic, attaining a peak mean performance of 95%. Statistical features represented in 1D were input to k-NN for EEG baseline classification (eyes open vs. eyes closed) by Gopan et al (2016b), and a peak mean accuracy of 77.92% was obtained.…”
Section: Introduction
confidence: 99%
“…A Gaussian distribution is often used in practical implementations, so that the loss function can be written as $\mathcal{L} = \|x - \hat{x}\|^{2} + D_{\mathrm{KL}}\!\left(q(z \mid x) \,\|\, \mathcal{N}(0, I)\right)$, where the regularization term pushes the distribution generated by the latent representations to approach a standard normal distribution. Variational AEs have been widely used for learning representations from biosignals to enhance biometric recognition performance [114, 128, 129, 130]. In these studies, the variational AEs were integrated with other constructs such as autoregressive layers [114] and graph neural networks [128] to capture dynamics from EEG time series for diverse classification tasks [114] and to learn graph embeddings from EEG functional connectivity input [128].…”
Section: Representation Learning In Cognitive Biometrics
confidence: 99%
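The loss described above (reconstruction error plus a KL regularizer pulling the latent distribution toward a standard normal) can be sketched in a few lines. This is an illustrative numpy sketch, not the cited authors' implementation; the function name and the diagonal-Gaussian `log_var` parameterization are my own assumptions.

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var):
    """Gaussian VAE loss: squared reconstruction error plus the closed-form
    KL divergence between q(z|x) = N(mu, diag(exp(log_var))) and N(0, I).
    Illustrative sketch; variable names are assumptions, not the paper's."""
    recon = np.sum((x - x_hat) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + kl
```

With a perfect reconstruction and a posterior already equal to the standard normal (mu = 0, log_var = 0), both terms vanish and the loss is zero, which is a quick sanity check on the KL formula.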
“…Latent representations learned by variational AEs also show promising results in EEG-based emotion recognition [130] and ERP-based driver–vehicle interface systems [129].…”
Section: Representation Learning In Cognitive Biometrics
confidence: 99%
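The snippet above mentions integrating graph neural networks with a variational AE to learn graph embeddings from EEG functional connectivity. A minimal sketch of such an encoder, assuming a VGAE-style design (symmetrically normalized adjacency propagation, then separate heads for the latent mean and log-variance), is shown below; all function names, shapes, and the single-layer structure are my own illustrative assumptions, not the cited architecture.

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution step: D^{-1/2} (A + I) D^{-1/2} @ X @ W, then ReLU."""
    a = adj + np.eye(adj.shape[0])             # add self-loops
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))  # D^{-1/2}
    a_hat = d @ a @ d                          # normalized adjacency
    return np.maximum(a_hat @ feats @ weights, 0.0)

def encode(adj, feats, w1, w_mu, w_logvar, rng):
    """VGAE-style encoder sketch: a shared GCN layer, linear heads for the
    mean and log-variance, and a reparameterized sample of the node embeddings."""
    h = gcn_layer(adj, feats, w1)
    mu = h @ w_mu
    log_var = h @ w_logvar
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
    return z, mu, log_var
```

Here the adjacency matrix would come from a thresholded EEG functional-connectivity matrix, and each row of `z` is the learned embedding of one electrode/node.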
“…In previous studies, EEG-FC has been shown to have characterizing properties for different individuals [8]. Commonly used FC indicators in identity recognition include Pearson’s correlation coefficient (COR), coherence (COH) [9, 10, 11], phase locking value (PLV) [4, 12, 13, 14, 15, 16], mutual information (MI) [9], and the Granger causality index (GC) [8]. These features are often used in isolation, which limits potential accuracy gains.…”
Section: Introduction
confidence: 99%
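Two of the FC indicators listed above can be computed directly from a multichannel EEG array. The sketch below shows Pearson correlation and the phase locking value (PLV, the magnitude of the mean pairwise phase-difference phasor, with instantaneous phase taken from an FFT-based analytic signal); the function names and the `(n_channels, n_samples)` layout are my own illustrative assumptions.

```python
import numpy as np

def fc_correlation(eeg):
    """Pearson-correlation FC matrix from an (n_channels, n_samples) EEG array."""
    return np.corrcoef(eeg)

def fc_plv(eeg):
    """Phase-locking-value FC matrix.
    Builds the analytic signal via the FFT (zeroing negative frequencies),
    takes instantaneous phases, and averages pairwise phase-difference phasors."""
    n = eeg.shape[-1]
    spec = np.fft.fft(eeg, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    phase = np.angle(np.fft.ifft(spec * h, axis=-1))  # instantaneous phase per channel
    diff = phase[:, None, :] - phase[None, :, :]      # pairwise phase differences
    return np.abs(np.exp(1j * diff).mean(axis=-1))    # |mean phasor|, in [0, 1]
```

Either matrix (or several of them stacked, addressing the "used in isolation" limitation the snippet notes) can then serve as the graph input to a connectivity-based recognizer.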