2022
DOI: 10.1007/s11571-022-09890-3
Feature hypergraph representation learning on spatial-temporal correlations for EEG emotion recognition

Cited by 10 publications (7 citation statements) · References 24 publications
“…In addition, Figure 5 shows the confusion matrix of Bi-ViTNet on the SEED dataset. The results show that, for Bi-ViTNet, neutral emotion is easier to identify than negative and positive emotions [22]. Table 3 shows the performance of all models on the SEED-IV dataset.…”
Section: E Results Analysis and Comparisonmentioning
confidence: 99%
“…Non-physiological signals, including speech, posture, and facial expressions [21], are external manifestations of human emotions. Physiological signals, which correspond to the physiological reactions caused by emotions, such as electrooculography (EOG), ECG, EMG, and EEG, are implicit expressions of human emotion [22]. Non-physiological signals, such as facial expressions and speech, are limited in their ability to reliably reflect an individual's true emotional state, as humans may conceal their emotions by masking their facial expressions and voice.…”
Section: Introductionmentioning
confidence: 99%
“…For instance, Zhang et al (2019) introduced a graph-based hierarchical model that classifies motor intentions based on the relationships between EEG signals and their spatial information. Li et al (2023) proposed a spatial-temporal hypergraph convolutional network (STHGCN) to capture higher-order relationships in EEG emotion recognition, achieving leading results on the SEED and SEED-IV datasets. Recently, Wagh and Varatharajah (2020) employed graph convolutional neural networks (GCNN) to classify epileptic versus normal data, achieving an AUC of 0.90.…”
Section: Related Workmentioning
confidence: 99%
“…Based on the H, D_v and D_e generated during the hypergraph construction process, as well as the subject feature inputs f_{Conv-LSTM} ∈ R^{M×S×Num1} and f_{PSD} ∈ R^{M×S×Num2}, where Num1 = N × t is the feature dimension of the spatiotemporal convolution branch and Num2 = N × 6 is the feature dimension of the PSD branch, hypergraph convolutions were conducted for each branch (Li et al., 2023), defined as Eq. (11).…”
Section: Hypergraph Convolutionmentioning
confidence: 99%
“…Lu et al [19] proposed a time-frequency domain feature that computes the differential entropy (DE) feature over five frequency bands. Li et al [20] proposed a spatial-temporal hypergraph convolutional network to explore spatial and temporal correlations under specific emotional states, and found that spatial-domain information can effectively improve emotion recognition accuracy.…”
Section: Introductionmentioning
confidence: 99%
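For context on the DE feature mentioned in the last excerpt: under the common assumption that a band-filtered EEG segment follows a Gaussian distribution, differential entropy has the closed form 0.5·ln(2πeσ²). A minimal sketch, with the function name and toy segment purely illustrative:

```python
import numpy as np

def differential_entropy(signal):
    """Differential entropy of a band-filtered EEG segment, assuming the
    samples are Gaussian:  DE = 0.5 * ln(2 * pi * e * sigma^2)."""
    var = np.var(signal)
    return 0.5 * np.log(2 * np.pi * np.e * var)

# In the EEG emotion-recognition literature, DE is typically computed per
# channel on each of the five classical bands (delta, theta, alpha, beta,
# gamma) after band-pass filtering.
rng = np.random.default_rng(0)
segment = rng.normal(0.0, 1.0, 200)   # stand-in for one filtered channel
de = differential_entropy(segment)
```

For unit variance the formula reduces to 0.5·ln(2πe) ≈ 1.42, which is why DE values in this literature are often reported on that scale per band.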