2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2021
DOI: 10.1109/embc46164.2021.9630210
Gated Transformer for Decoding Human Brain EEG Signals

Cited by 26 publications (26 citation statements)
References 16 publications
“…While some studies showed that the Transformer model outperformed LSTM models in ASR 27–29, studies on EEG signals have also successfully applied Transformer models for BCI 30–36. Notably, studies by Tao et al. 31 and Siddhad et al. 32 showed that Transformer models outperformed LSTM models even when trained on a small dataset, that is, of less than 10 h. This dataset size is considerably smaller than that typically employed in ASR, which usually comprises over 1,000 h of speech.…”
Section: Discussion
confidence: 99%
“…Vision Transformer (ViT) and its variants are highly capable of capturing global contextual information and long-range dependence in attention-based modeling of a plethora of modalities, including EEG spectra (Bagchi and Bathula 2022; Tao et al. 2021), but are not capable enough to learn the high-frequency components that are crucial for fine-grained information extraction and classification (Wang et al. 2022; Bai et al. 2022). Recent years have witnessed significant advances in the examination of ViT from the spectral domain, which contributes to resolving important gaps such as attention collapse (Wang et al. 2022).…”
Section: Vision Transformer
confidence: 99%
“…Seizure dynamics are characterized by high-frequency outbursts of epileptiform abnormalities. Recent advances in attention-based deep neural networks, especially Transformers, have prompted augmented representational learning for EEG classification tasks (Bagchi and Bathula 2022; Siddhad et al. 2022; Tao et al. 2021; Sun, Xie, and Zhou 2021). However, the problem of losing relevant clinical biomarkers in irregularly altered EEG signals has not been explicitly addressed.…”
Section: Introduction
confidence: 99%
“…They divided EEG signals into different brain regions according to Power Spectral Density (PSD) [206] and realized hierarchical learning of spatial information from the electrode level to the brain-region level using parallel self-attention. Tao et al. [202] established a Gated Recurrent Unit Transformer (GRU-gated Transformer) to capture the long-term dependencies of EEG signals. The gating mechanism stabilized training of the GRU-gated model, which was assessed on EEG datasets for human brain-visual and motor imagery.…”
Section: EEG Processing
confidence: 99%
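The excerpt above describes a gating mechanism that stabilizes Transformer training on EEG data. A common way to realize this, used by GRU-gated Transformer variants, is to replace the residual connection y = x + sublayer(x) in each Transformer block with a GRU-style gate. The sketch below is a hypothetical minimal implementation in PyTorch (the class name `GRUGate` and the bias initialization value are assumptions, not taken from the cited paper); initializing the gate bias to a positive value biases the gate toward the identity mapping early in training, which is the stabilizing effect the excerpt refers to.

```python
import torch
import torch.nn as nn


class GRUGate(nn.Module):
    """GRU-style gate replacing the residual connection in a Transformer block.

    Hypothetical sketch: given the block input x (the residual stream) and the
    sub-layer output y (e.g. from self-attention), the gate mixes them with
    learned reset/update gates instead of simple addition.
    """

    def __init__(self, d_model: int, gate_bias: float = 2.0):
        super().__init__()
        self.Wr = nn.Linear(d_model, d_model, bias=False)  # reset gate, from y
        self.Ur = nn.Linear(d_model, d_model, bias=False)  # reset gate, from x
        self.Wz = nn.Linear(d_model, d_model, bias=False)  # update gate, from y
        self.Uz = nn.Linear(d_model, d_model, bias=False)  # update gate, from x
        self.Wg = nn.Linear(d_model, d_model, bias=False)  # candidate, from y
        self.Ug = nn.Linear(d_model, d_model, bias=False)  # candidate, from gated x
        # Positive bias pushes the update gate toward 0 at initialization,
        # so the block starts close to the identity map (stabilizes training).
        self.bz = nn.Parameter(torch.full((d_model,), gate_bias))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        r = torch.sigmoid(self.Wr(y) + self.Ur(x))            # reset gate
        z = torch.sigmoid(self.Wz(y) + self.Uz(x) - self.bz)  # update gate
        h = torch.tanh(self.Wg(y) + self.Ug(r * x))           # candidate state
        return (1.0 - z) * x + z * h                          # gated mix
```

In a full block, `x` would be the layer input and `y` the output of the attention or feed-forward sub-layer, so the gate sits exactly where the residual addition would normally be.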