2020
DOI: 10.1016/j.comcom.2020.02.051
Emotion recognition from spatiotemporal EEG representations with hybrid convolutional recurrent neural networks via wearable multi-channel headset

Cited by 59 publications (38 citation statements)
References 8 publications
“…The final accuracy of their multi-column model for the classification of valence and arousal is reported to be 90% for each. Chen et al [38] used parallel hybrid convolutional recurrent neural networks to classify binary emotions from EEG signals. The DEAP database was also used by these researchers.…”
Section: Introduction
confidence: 99%
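The parallel hybrid convolutional recurrent network mentioned in the excerpt above can be sketched as two branches, one convolutional and one recurrent, run side by side on the same EEG segment and fused before classification. This is a minimal illustrative PyTorch sketch, not the cited authors' exact architecture; the layer sizes, channel count, and segment length are assumptions.

```python
import torch
import torch.nn as nn

class ParallelCRNN(nn.Module):
    """Toy parallel CNN + RNN for binary EEG emotion classification."""

    def __init__(self, n_channels=32, hidden=64):
        super().__init__()
        # CNN branch: temporal convolution over the raw segment,
        # pooled to one feature vector per trial.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # RNN branch: LSTM over the time axis of the same segment.
        self.rnn = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                           batch_first=True)
        # Fuse both branches by concatenation, then classify (2 classes).
        self.fc = nn.Linear(32 + hidden, 2)

    def forward(self, x):                 # x: (batch, channels, time)
        c = self.cnn(x).squeeze(-1)       # (batch, 32)
        _, (h, _) = self.rnn(x.transpose(1, 2))
        r = h[-1]                         # (batch, hidden)
        return self.fc(torch.cat([c, r], dim=1))

# Forward pass on a synthetic batch: 4 segments, 32 channels, 128 samples.
out = ParallelCRNN()(torch.randn(4, 32, 128))
print(out.shape)
```

Running the two branches in parallel (rather than cascading CNN into RNN) lets each branch see the raw segment directly; the cited work evaluates both arrangements.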
“…Similarly, Dai and colleagues [41] used a time-frequency representation (spectrogram image) of the EEG obtained via the STFT. Jingxia and colleagues [42] also used frequency-domain features: they extracted 64 Power Spectral Density (PSD) features using a Hamming window with a width of 0.5 s over the 1–47 Hz frequency range.…”
Section: Results
confidence: 99%
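The PSD feature extraction described in the excerpt above (0.5 s Hamming window, 1–47 Hz band) can be sketched with Welch's method from SciPy. The sampling rate, channel count, and synthetic signal below are illustrative assumptions, not the cited authors' exact pipeline.

```python
import numpy as np
from scipy.signal import welch

fs = 128                            # assumed sampling rate in Hz
win = int(0.5 * fs)                 # 0.5 s Hamming window, as in the excerpt
eeg = np.random.randn(32, 10 * fs)  # synthetic EEG: 32 channels, 10 s

# Welch PSD per channel with a Hamming window of 0.5 s.
freqs, psd = welch(eeg, fs=fs, window="hamming", nperseg=win, axis=-1)

# Keep only the 1-47 Hz band as features.
band = (freqs >= 1) & (freqs <= 47)
features = psd[:, band]
print(features.shape)
```

With these assumed parameters the frequency resolution is fs / nperseg = 2 Hz, so the 1–47 Hz band yields 23 PSD bins per channel; the cited 64-feature count depends on their actual window and rate.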
“…Tao et al. proposed an attention-based convolutional recurrent neural network (ACRNN) to extract discriminative features from EEG signals [53]. Chen et al. transformed the 1D chain-like EEG vector into a 2D mesh-like matrix sequence [54]. The 2D matrix sequence was divided into segments containing equal numbers of time points using a sliding window, and fed to both cascaded and parallel hybrid convolutional recurrent neural networks for training.…”
Section: B. Deep Learning Approaches For Emotion Recognition
confidence: 99%
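The 1D-to-2D transformation and sliding-window segmentation described above can be sketched as follows: each electrode is placed at a (row, col) position on a scalp-like grid, and the resulting mesh sequence is cut into equal-length windows. The 3×3 grid, the channel-to-position mapping, and the window sizes are assumptions for illustration, not the cited paper's layout.

```python
import numpy as np

n_channels, n_samples = 9, 1000
eeg = np.random.randn(n_channels, n_samples)   # 1D chain-like channel vector per time point

# Assumed mapping of each channel index to a (row, col) scalp-grid position.
grid_pos = {ch: (ch // 3, ch % 3) for ch in range(n_channels)}

# Build the 2D mesh-like matrix sequence: (rows, cols, time).
mesh = np.zeros((3, 3, n_samples))
for ch, (r, c) in grid_pos.items():
    mesh[r, c, :] = eeg[ch]

# Slide a fixed-length, non-overlapping window over time so every
# segment contains an equal number of time points.
win, step = 128, 128
segments = [mesh[:, :, s:s + win]
            for s in range(0, n_samples - win + 1, step)]
print(len(segments), segments[0].shape)
```

Each segment is then a short "video" of scalp-topography frames, which is what lets 2D convolutions capture spatial relations between electrodes before the recurrent layers model time.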