2018
DOI: 10.1088/1741-2552/aae5d8
Compact convolutional neural networks for classification of asynchronous steady-state visual evoked potentials

Abstract: Objective. Steady-State Visual Evoked Potentials (SSVEPs) are neural oscillations from the parietal and occipital regions of the brain that are evoked from flickering visual stimuli. SSVEPs are robust signals measurable in the electroencephalogram (EEG) and are commonly used in brain-computer interfaces (BCIs). However, methods for high-accuracy decoding of SSVEPs usually require hand-crafted approaches that leverage domain-specific knowledge of the stimulus signals, such as specific temporal frequencies in th…
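The "hand-crafted approaches that leverage domain-specific knowledge" mentioned in the abstract typically exploit the fact that an SSVEP response contains power at the flicker frequency of the attended stimulus. A minimal sketch of that idea, using a synthetic single-channel trial (the function name and parameters here are illustrative, not from the paper):

```python
import numpy as np

def ssvep_power_decode(eeg, fs, stim_freqs):
    """Classify an SSVEP trial by comparing spectral power at each
    candidate stimulus frequency (simplified single-channel sketch)."""
    n = eeg.shape[-1]
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Power at the FFT bin nearest each candidate flicker frequency
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return int(np.argmax(powers))

# Synthetic trial: a 12 Hz oscillation plus noise, sampled at 250 Hz
fs = 250
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
trial = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
print(ssvep_power_decode(trial, fs, stim_freqs=[8.0, 10.0, 12.0, 15.0]))  # → 2 (the 12 Hz class)
```

Practical SSVEP decoders refine this with multi-channel methods such as canonical correlation analysis, but the frequency-template dependence is exactly what makes them stimulus-specific — the limitation the paper's CNN approach addresses.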


Cited by 155 publications (122 citation statements)
References 71 publications (124 reference statements)
“…For deeper layers, however, the hierarchical nature of neural networks means it is much harder to understand what a weight is applied to. The analysis of model activations was used in multiple studies [212,194,87,83,208,167,154,109]. This kind of inspection method usually involves visualizing the activations of the trained model over multiple examples, and thus inferring how different parts of the network react to known inputs.…”
Section: Inspection of Trained Models
confidence: 99%
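The activation-analysis technique described in the quote — recording a trained model's intermediate activations over known inputs and summarizing how units respond — can be sketched with a toy network. Everything here (the weights, the network shape) is a made-up illustration, not any model from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "trained" network: one hidden ReLU layer with fixed weights
W1 = rng.standard_normal((4, 8))   # input dim 4 -> 8 hidden units
W2 = rng.standard_normal((8, 2))   # hidden -> 2 outputs

def forward_with_activations(x):
    """Run the toy network and also return the hidden activations,
    mimicking the activation-recording hooks used to inspect real models."""
    h = np.maximum(0.0, x @ W1)    # hidden ReLU activations
    return h @ W2, h

# Inspect how each hidden unit responds over a batch of known inputs
batch = rng.standard_normal((100, 4))
_, hidden = forward_with_activations(batch)
mean_act = hidden.mean(axis=0)                 # average response per unit
dead_units = np.flatnonzero(mean_act == 0.0)   # units that never fired
print(mean_act.round(2), dead_units)
```

In a real deep-learning framework the same effect is usually achieved with forward hooks on the layers of interest; the inference step — visualizing which inputs drive which units — is the same.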
“…Occlusion sensitivity techniques [92,26,175] use a similar idea, analyzing the network's decisions while different parts of the input are occluded. Several studies used backpropagation-based techniques to generate input maps that maximize activations of specific units [188,144,160,15]. These maps can then be used to infer the role of specific neurons, or the kind of input they are sensitive to.…”
[Table residue — inspection methods and citing studies: [135,211,86,34,87,200,182,122,170,228,164,109,204,85,25]; analysis of activations [212,194,87,83,208,167,154,109]; input-perturbation network-prediction correlation maps [149,191,67,16,150]; generating input to maximize activation [188,144,160,15]; occlusion of input [92,26,175]]
Section: Inspection of Trained Models
confidence: 99%
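Occlusion sensitivity, as described in the quote above, slides an occluding window over the input and records how much the model's score drops at each position; large drops mark regions the decision relies on. A minimal one-dimensional sketch with a toy scoring function (all names here are illustrative):

```python
import numpy as np

def occlusion_sensitivity(score_fn, x, window):
    """For each position, zero out a window of the input and record how
    much the model's score drops relative to the unoccluded baseline."""
    base = score_fn(x)
    drops = np.zeros(x.size - window + 1)
    for i in range(drops.size):
        occluded = x.copy()
        occluded[i:i + window] = 0.0        # occlude this segment
        drops[i] = base - score_fn(occluded)
    return drops

# Toy "model": its score is the energy in samples 10..19 of the input
score = lambda x: float(np.sum(x[10:20] ** 2))
signal = np.ones(30)
drops = occlusion_sensitivity(score, signal, window=5)
print(int(np.argmax(drops)))  # → 10: occluding inside the relevant region hurts most
```

The same loop generalizes to 2-D image patches or EEG channel/time windows; only the occlusion mask and the scoring function change.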
“…In Experiment B, EEGNet and DeepConvNet were pretrained in the same way as ERPENet. Despite EEGNet's compact size and the claim in [28] that EEGNet can be trained with very limited data, EEGNet did not perform well on BCI-COMP, the smallest dataset in this evaluation. Table 6 also shows that ERPENet outperforms DeepConvNet, while DeepConvNet performs comparably to EEGNet, consistent with the P300 experiments in [28].…”
Section: Discussion
confidence: 78%
“…Then, ERPENet was adopted as a pre-trained network for the attended and unattended event classification network. The results are compared against the state-of-the-art P300 dimensionality reduction algorithm Xdawn [26] with Bayesian LDA classification [27], and against state-of-the-art deep learning models for EEG classification, EEGNet [28] and DeepConvNet [29], both of which were designed for a variety of EEG classification tasks and have yielded state-of-the-art results.…”
Section: Introduction
confidence: 99%
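The Xdawn baseline mentioned in the quote learns spatial filters that enhance the evoked response relative to the ongoing EEG. A simplified sketch of that idea — not the exact algorithm of [26], and with an illustrative function name — solves a generalized eigenvalue problem between the evoked-response covariance and the overall signal covariance:

```python
import numpy as np
from scipy.linalg import eigh

def xdawn_like_filters(epochs, n_filters=2):
    """Xdawn-style spatial filters (simplified sketch): maximize the ratio
    of evoked-response covariance to raw-signal covariance via a
    generalized eigendecomposition.
    epochs: array of shape (n_trials, n_channels, n_samples)."""
    evoked = epochs.mean(axis=0)                 # average ERP estimate
    S_evoked = evoked @ evoked.T / evoked.shape[1]
    X = np.concatenate(list(epochs), axis=1)     # channels x all samples
    S_signal = X @ X.T / X.shape[1]
    # Generalized eigenvectors, sorted by descending eigenvalue
    vals, vecs = eigh(S_evoked, S_signal)
    return vecs[:, ::-1][:, :n_filters].T        # (n_filters, n_channels)

rng = np.random.default_rng(2)
epochs = rng.standard_normal((20, 8, 100))       # synthetic EEG epochs
filters = xdawn_like_filters(epochs)
print(filters.shape)  # → (2, 8)
```

Projecting epochs through these filters before classification (e.g. with Bayesian LDA, as in [27]) reduces dimensionality while concentrating the event-related signal in a few virtual channels.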
“…Recent work by [13] has yielded a CNN architecture for EEG (EEGNet) that can learn from relatively small amounts of data, on the order of hundreds of trials per subject. Furthermore, [13,44] showed that EEGNet enabled cross-subject transfer performance equal to or better than conventional approaches for several EEG classification paradigms, both event-related and oscillatory. EEGNet is also the model used to obtain our cross-experiment results described in [26,39,45].…”
Section: CNNs for Neural Decoding
confidence: 99%
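EEGNet's ability to learn from small datasets comes largely from its very small parameter count, achieved with depthwise-separable convolutions. A back-of-the-envelope comparison shows why that block is so much cheaper than a standard convolution (the layer sizes below are illustrative, not EEGNet's actual hyperparameters from [13]):

```python
def conv_params(in_ch, out_ch, k):
    """Weights in a standard convolution with k-tap kernels (bias omitted)."""
    return in_ch * out_ch * k

def separable_conv_params(in_ch, out_ch, k, depth_mult=1):
    """Depthwise (per-channel) convolution followed by a 1x1 pointwise mix —
    the kind of building block EEGNet uses to stay compact."""
    depthwise = in_ch * depth_mult * k        # one small kernel per channel
    pointwise = in_ch * depth_mult * out_ch   # 1x1 channel mixing
    return depthwise + pointwise

# Example: 16 input maps, 16 output maps, kernel length 16
standard = conv_params(16, 16, 16)            # 4096 weights
separable = separable_conv_params(16, 16, 16) # 512 weights
print(standard, separable)  # → 4096 512
```

An 8x reduction per layer at this size compounds across the network, which is one reason compact architectures can be trained on hundreds rather than tens of thousands of trials.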