DOI: 10.1109/taffc.2019.2922912

From Regional to Global Brain: A Novel Hierarchical Spatial-Temporal Neural Network Model for EEG Emotion Recognition

Abstract: In this paper, we propose a novel electroencephalograph (EEG) emotion recognition method inspired by neuroscience findings on the brain's response to different emotions. The proposed method, denoted R2G-STNN, consists of spatial and temporal neural network models with a regional-to-global hierarchical feature learning process to learn discriminative spatial-temporal EEG features. To learn the spatial features, a bidirectional long short-term memory (BiLSTM) network is adopted to capture the intrinsic s…
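The regional-to-global spatial step the abstract describes can be sketched in miniature: summarize each brain region's channels as a feature vector, run a BiLSTM over the sequence of regional features, and concatenate the two final hidden states as the global representation. The sketch below is a toy, dependency-free illustration under that reading; the region count, feature sizes, and random weights are all hypothetical and this is not the authors' implementation.

```python
# Toy sketch of regional-to-global feature learning with a BiLSTM.
# All dimensions and weights are hypothetical illustrations only.
import math
import random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lstm_step(x, h, c, W):
    """One LSTM cell step. W has 4*len(h) rows, each mapping the
    concatenated [x, h] vector to one gate pre-activation."""
    z = x + h                               # list concatenation: [x, h]
    n = len(h)
    gates = [sum(w * v for w, v in zip(row, z)) for row in W]
    i = [sigmoid(g) for g in gates[0:n]]            # input gate
    f = [sigmoid(g) for g in gates[n:2 * n]]        # forget gate
    o = [sigmoid(g) for g in gates[2 * n:3 * n]]    # output gate
    g = [math.tanh(v) for v in gates[3 * n:4 * n]]  # candidate state
    c_new = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
    h_new = [oj * math.tanh(cj) for oj, cj in zip(o, c_new)]
    return h_new, c_new

def bilstm_summary(seq, hid, W_fwd, W_bwd):
    """Run forward and backward LSTMs over the regional sequence and
    concatenate the final hidden states: the 'global' brain feature."""
    def run(s, W):
        h, c = [0.0] * hid, [0.0] * hid
        for x in s:
            h, c = lstm_step(x, h, c, W)
        return h
    return run(seq, W_fwd) + run(list(reversed(seq)), W_bwd)

# Hypothetical setup: 4 brain regions, each summarized by a 3-dim feature.
REGIONS, FEAT, HID = 4, 3, 5

def rand_W(in_dim, hid):
    return [[random.uniform(-0.1, 0.1) for _ in range(in_dim + hid)]
            for _ in range(4 * hid)]

regional_feats = [[random.random() for _ in range(FEAT)]
                  for _ in range(REGIONS)]
global_feat = bilstm_summary(regional_feats, HID,
                             rand_W(FEAT, HID), rand_W(FEAT, HID))
print(len(global_feat))  # 2 * HID = 10
```

In the actual model, the same idea is applied hierarchically and a temporal BiLSTM then runs over these global features across time windows before classification.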

Cited by 145 publications (87 citation statements); References 46 publications.

“…These methods automatically learn feature representations from the data to distinguish among different classes. Li et al. [12] used a spatial and temporal deep learning architecture to learn discriminative spatial-temporal EEG features for the detection of emotional states. Hefron et al. [13] suggested a novel convolutional recurrent neural model using multipath subnetworks for cross-participant EEG-based assessment of cognitive workload.…”
Section: Introduction (mentioning)
confidence: 99%
“…Method | ACC/STD (%)
SVM (Suykens and Vandewalle 1999) | 83.99 / 09.72
CCA (Thompson 2005) | 77.63 / 13.21
DBN (Zheng and Lu 2015) | 86.08 / 08.34
GCNN (Defferrard 2016) | 87.40 / 09.20
DANN (Ganin et al. 2016) | 91.36 / 08.30
GRSLR (Li et al. 2018c) | 87.39 / 08.64
DGCNN (Song et al. 2018) | 90.40 / 08.49
BiDANN (Li et al. 2018b) | 92.38 / 07.04
R2G-STNN (Li et al. 2019) | 93.38 / 05.96
IAG | 95.44 / 05.48…”
Section: Experiment Results (mentioning)
confidence: 99%
“…Method | ACC/STD (%)
SVM (Suykens and Vandewalle 1999) | 56.73 / 16.29
KPCA (Schölkopf and Müller 1998) | 61.28 / 14.62
TCA (Pan et al. 2011) | 63.64 / 14.88
TPT (Sangineto et al. 2014) | 76.31 / 15.89
DANN (Ganin et al. 2016) | 75.08 / 11.18
DGCNN (Song et al. 2018) | 79.95 / 09.02
BiDANN (Li et al. 2018b) | 83.28 / 09.60
BiDANN-S (Li et al. 2018d) | 84.14 / 06.87
R2G-STNN (Li et al. 2019) | 84
As shown in Table 2, our IAG achieves better classification results, 5.04% and 8.04% higher than the graph-based methods DGCNN and GCNN, respectively. Although these are all graph-based methods, our self-adaptive structure is more effective at characterizing the intrinsic relationships between different EEG channels.…”
Section: Experiment Results (mentioning)
confidence: 99%
“…They are thus not likely to share common EEG distributions corresponding to the same emotional states, meaning that the performance of a generic machine-learning model will either be compromised or will fail for certain individuals. Some related work has explored the negative impact of inter-individual non-stationarity on affective computing (Lin et al., 2010b; Soleymani et al., 2012; Lin and Jung, 2017; Li et al., 2019; Xing et al., 2019). In other words, a subject-independent model (i.e., one trained on the aggregated data of all available individuals) did not consistently outperform a subject-dependent counterpart, despite the increased amount of training data.…”
Section: Introduction (mentioning)
confidence: 99%