2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2018.8462207

EEG-Based Video Identification Using Graph Signal Modeling and Graph Convolutional Neural Network

Abstract: This paper proposes a novel graph signal-based deep learning method for electroencephalography (EEG) and its application to EEG-based video identification. We present new methods to effectively represent EEG data as signals on graphs, and learn them using graph convolutional neural networks. Experimental results for video identification using EEG responses obtained while watching videos show the effectiveness of the proposed approach in comparison to existing methods. Effective schemes for graph signal represe…
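The graph-convolutional learning the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: `gcn_layer`, the electrode adjacency matrix `A`, and the weight matrix `W` are all hypothetical names, and the propagation rule shown is the common Kipf–Welling normalization rather than anything confirmed by the paper.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution step: ReLU(D^-1/2 (A + I) D^-1/2 X W).

    X: (n_nodes, n_features) node signals (e.g. per-electrode EEG features)
    A: (n_nodes, n_nodes) symmetric adjacency matrix of the electrode graph
    W: (n_features, n_out) learnable weights
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Example: 3 electrodes in a chain, 4 input features, 2 output features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 2))
H = gcn_layer(X, A, W)   # shape (3, 2), non-negative after ReLU
```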

Cited by 47 publications (48 citation statements) · References 21 publications
“…Brain signals provide comprehensive information regarding the mental state of a human subject. Jang et al. [83] proposed the first method to apply deep learning on graph signals to EEG-based visual stimulus identification. The model converts the EEG into graph signals with appropriate graph structures and signal features as input to GCNs to identify the visual stimulus watched by a human subject.…”
Section: Case Studies of GNN for Medical Diagnosis and Analysis
confidence: 99%
“…GCNN is an effective method for extracting features from discrete spatial signals [127, 128], which can be used to explore the spatial connectivity of multi‐dimensional EEG signals for emotion recognition [118]. Jang et al. [117] utilized GCNN for emotion recognition based on the DEAP database. First, intra‐band graphs were created for each electrode by calculating the power and entropy of the Delta (0–3 Hz), Theta (4–7 Hz), low Alpha (8–10 Hz), high Alpha (10–12 Hz), low Beta (13–16 Hz), middle Beta (17–20 Hz), and high Beta (21–29 Hz) bands, while considering the relationships between electrodes.…”
Section: EEG‐Based Emotion Classifiers
confidence: 99%
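The band-wise power and entropy features described in this excerpt can be sketched roughly as follows. The band edges follow the quote; everything else — the `band_features` helper, the periodogram-based spectral estimate, and the Shannon-entropy formulation — is an illustrative assumption, not the cited method.

```python
import numpy as np

# Frequency bands (Hz) as listed in the quoted passage
BANDS = {
    "delta": (0, 3), "theta": (4, 7), "low_alpha": (8, 10),
    "high_alpha": (10, 12), "low_beta": (13, 16),
    "mid_beta": (17, 20), "high_beta": (21, 29),
}

def band_features(signal, fs):
    """Per-band power and spectral entropy of one EEG channel (periodogram estimate)."""
    n = len(signal)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / n           # simple periodogram
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        p = psd[(freqs >= lo) & (freqs <= hi)]
        power = p.sum()
        prob = p / (power + 1e-12)                       # normalize to a distribution
        entropy = -np.sum(prob * np.log2(prob + 1e-12))  # Shannon entropy within the band
        feats[name] = (power, entropy)
    return feats

# Example: 2 s at 128 Hz dominated by a 10 Hz (alpha) oscillation
fs = 128
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
feats = band_features(x, fs)
```

On synthetic data like this, the alpha bands should carry far more power than the delta or theta bands.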
“…GCNN is an effective method for extracting features from discrete spatial signals [127, 128], which can be used to explore the spatial connectivity of multi-dimensional EEG signals for emotion recognition [118]. Jang et al. [117]…”
Section: Graph Convolutional Neural Network
confidence: 99%
“…Quantitative Results: We compare with several baselines on the ZuCo 2.0 dataset and report the average F1 score and accuracy in Table 1. Unimodal baselines include methods that process EEG and EM independently, either using recurrent neural networks such as LSTMs [16] or graph convolutional networks (GCNs) similar to [21]. In contrast, multimodal baselines perform late fusion to process both EEG and EM signals.…”
Section: Experiments and Results
confidence: 99%
“…Our model generates sparser graphs, with an average node degree of 2.58 and 1688 total edges. This is a significant reduction in the number of parameters compared to the best-performing graph structure obtained by [17] (average node degree = 8, total edges = 3524). Qualitative Results: We analyze the eye movement across words and the EEG signals for one particular reading sequence in Figure 2.…”
Section: Experiments and Results
confidence: 99%
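The sparsity statistics quoted in this excerpt (total edge count and average node degree) follow directly from a graph's adjacency matrix. A minimal sketch, with `graph_sparsity_stats` a hypothetical helper unrelated to the cited model:

```python
import numpy as np

def graph_sparsity_stats(A):
    """Edge count and average node degree of an undirected graph given its adjacency matrix."""
    assert np.allclose(A, A.T), "expects an undirected (symmetric) graph"
    n_edges = int(np.triu(A, k=1).sum())   # count each undirected edge once
    avg_degree = A.sum() / A.shape[0]      # equals 2 * n_edges / n_nodes
    return n_edges, avg_degree

# Example: a 6-node ring graph has 6 edges and average degree 2.0
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
edges, deg = graph_sparsity_stats(A)
```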