2023
DOI: 10.1016/j.bspc.2023.104806
Deep time-frequency features and semi-supervised dimension reduction for subject-independent emotion recognition from multi-channel EEG signals

Cited by 19 publications (11 citation statements)
References 51 publications
“…They achieved 85.45% and 85.65% for valence and arousal, respectively. Xing et al. 24 , however, employed a stacked autoencoder (SAE) with an LSTM-RNN to address the linear EEG signals problem, obtaining accuracies of 81.10% for valence and 74.38% for arousal.…”
Section: Discussion (mentioning, confidence: 99%)
See 1 more Smart Citation
“…They achieved 85.45% and 85.65% for valence and arousal. However, Xing et al 24 employed SAE (Stack AutoEncoder) with LSTM-RNN to fix the linear EEG signals problem. The accuracies obtained were 81.10% and 74.38% in valence and arousal, respectively.…”
Section: Discussionmentioning
confidence: 99%
“…Furthermore, researchers applied three distinct methods 24 for combining data from multiple channels; the fusion-after-deep-feature-reduction (FaDFR) method, which fuses reduced deep time–frequency features from the EEG channels, using Inception-V3 25 as the CNN for deep feature extraction and an SVM for classification, produced the best results: 88.6% accuracy on the DEAP dataset and 94.58% on the SEED dataset 26 .…”
Section: Related Work (mentioning, confidence: 99%)
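The FaDFR pipeline described above (per-channel deep feature reduction, then fusion, then SVM classification) can be sketched as follows. This is not the cited paper's implementation: the feature arrays are synthetic placeholders standing in for Inception-V3 outputs, and the shapes, PCA dimensionality, and SVM settings are illustrative assumptions only.

```python
# Minimal sketch of a "fusion after deep feature reduction" (FaDFR)-style
# pipeline: reduce deep time-frequency features per EEG channel, fuse the
# reduced features, then classify with an SVM. All data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_channels, feat_dim = 120, 4, 256  # placeholder sizes

# Stand-in for per-channel deep features a CNN (e.g., Inception-V3) would emit.
deep_feats = rng.normal(size=(n_trials, n_channels, feat_dim))
labels = rng.integers(0, 2, size=n_trials)  # e.g., low/high valence

# Reduce each channel's features first, then fuse by concatenation.
reduced = [PCA(n_components=16).fit_transform(deep_feats[:, c, :])
           for c in range(n_channels)]
fused = np.hstack(reduced)  # shape: (n_trials, n_channels * 16)

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
acc = clf.score(X_test, y_test)  # accuracy in [0, 1]
```

Reducing before fusing (rather than after) keeps the fused vector small, which is the design choice the "fusion after deep feature reduction" name refers to.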
“…Therefore, the CDBA model proposed in this paper is the most suitable for emotion prediction compared with other models. Physiological signals such as EEG have the advantages of universality, spontaneity, and being difficult to camouflage ( Zali-Vargahan et al., 2023 ), and human cognitive behavior and mental activity correlate strongly with EEG signals. Physiological signals are therefore a good choice for recognizing emotions ( Liu et al., 2020 ).…”
Section: Discussion (mentioning, confidence: 99%)
“…Liu et al. ( 2023 ) combined an attention mechanism with a pre-trained convolutional capsule network to extract spatial information from raw emotional EEG signals. Zali-Vargahan et al. ( 2023 ) introduced a CNN to extract deep time-frequency features and employed several machine-learning classifiers, such as decision trees, to classify emotional states, achieving an average accuracy of 94.58% on SEED. Ma et al. ( 2023 ) developed transfer learning methods to reduce the distribution differences of emotional EEG signals between subjects, enabling more robust cross-subject emotion recognition.…”
Section: Introduction (mentioning, confidence: 99%)
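The classification stage attributed to Zali-Vargahan et al. above (CNN-extracted deep time-frequency features fed to conventional classifiers such as decision trees) can be sketched like this. The feature vectors below are synthetic stand-ins for CNN outputs, and the tree depth, fold count, and class labels are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: classify deep time-frequency feature vectors (synthetic
# placeholders for CNN outputs) with a decision tree, evaluated by
# 5-fold cross-validation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 64))    # stand-in deep feature vectors
y = rng.integers(0, 3, size=150)  # e.g., negative/neutral/positive states

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
scores = cross_val_score(tree, X, y, cv=5)  # one accuracy per fold
```

Swapping `DecisionTreeClassifier` for any other scikit-learn estimator (e.g., an SVM or k-NN) reproduces the "several machine-learning classifiers" comparison the excerpt mentions.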