2021
DOI: 10.21203/rs.3.rs-1085276/v1
Preprint

GTSception: A Deep Learning EEG Emotion Recognition Model Based on Fusion of Global, Time Domain and Frequency Domain Feature Extraction

Abstract: With the rapid development of deep learning in recent years, automatic electroencephalography (EEG) emotion recognition has attracted wide attention. At present, most deep learning methods do not normalize EEG data properly and do not fully extract time- and frequency-domain features, which affects the accuracy of EEG emotion recognition. To solve these problems, we propose GTSception, a deep learning EEG emotion recognition model. In pre-processing, the EEG time slicing data including channels were pre…
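The abstract's point about normalizing EEG data can be illustrated with a minimal sketch. The paper's exact normalization scheme is not shown in this excerpt, so the per-channel z-scoring below, the function name `normalize_eeg_window`, and the 32-channel/128 Hz window shape are all illustrative assumptions, not the authors' method:

```python
import numpy as np

def normalize_eeg_window(window: np.ndarray) -> np.ndarray:
    """Z-score each EEG channel of a time-sliced window.

    window: array of shape (channels, samples).
    Returns a copy where every channel has ~zero mean and ~unit variance,
    one common way to normalize EEG segments before a deep model.
    """
    mean = window.mean(axis=1, keepdims=True)
    std = window.std(axis=1, keepdims=True)
    return (window - mean) / (std + 1e-8)  # epsilon guards flat channels

# Example: a synthetic 32-channel, 1-second window sampled at 128 Hz
rng = np.random.default_rng(0)
segment = rng.normal(loc=5.0, scale=3.0, size=(32, 128))
normalized = normalize_eeg_window(segment)
```

Normalizing per channel (rather than over the whole window) keeps inter-channel amplitude differences from dominating the learned features, which is the kind of pre-processing concern the abstract raises.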

Cited by 3 publications
(1 citation statement)
References 34 publications
“…Many authors preferred using time-domain features; for example, Wang et al. created two groups of 10 people, labeled mediated and non-mediated, to track the properties of EEG signals at a frequency of 250 Hz [19]. Zhao et al. presented GTSception as an advanced deep learning model for EEG emotion recognition [20]. Two varieties of spatial convolution kernels are incorporated, with a specific focus on highlighting disparities between brain hemispheres for spatial feature extraction.…”
Section: Introduction
confidence: 99%