2021
DOI: 10.3389/fnhum.2021.655840

A Lightweight Multi-Scale Convolutional Neural Network for P300 Decoding: Analysis of Training Strategies and Uncovering of Network Decision

Abstract: Convolutional neural networks (CNNs), which automatically learn features from raw data to approximate functions, are being increasingly applied to the end-to-end analysis of electroencephalographic (EEG) signals, especially for decoding brain states in brain-computer interfaces (BCIs). Nevertheless, CNNs introduce a large number of trainable parameters, may require long training times, and lack in interpretability of learned features. The aim of this study is to propose a CNN design for P300 decoding with emph…

Cited by 21 publications (25 citation statements)
References 42 publications
“…In addition, 10% of trials belonging to the training set were held out to form a validation set devoted to defining a stop criterion for the optimization (see later). All sets were standardized before network training, using the mean value and standard deviation computed on the training examples 46,64,65. Furthermore, for each held-out subject (i.e., each cross-validation fold), the parameters of the decoder were trained using different initializations.…”
Section: Methods (mentioning)
confidence: 99%
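The quoted passage describes a concrete training protocol: a 10% validation hold-out used as the stop criterion, standardization with statistics computed on the training examples only, and repeated training from different random initializations in each cross-validation fold. A minimal sketch of that scheme is given below, assuming NumPy arrays of trials and a user-supplied training routine; the names run_fold and train_decoder, and the number of initializations, are hypothetical and not taken from the cited study.

import numpy as np

def standardize_with_train_stats(X_tr, X_val, X_te):
    # Z-score every set with the mean/std of the training examples only,
    # so no information leaks from validation or test data.
    mu, sigma = X_tr.mean(), X_tr.std()
    return (X_tr - mu) / sigma, (X_val - mu) / sigma, (X_te - mu) / sigma

def run_fold(X_train, y_train, X_test, train_decoder,
             val_fraction=0.1, n_inits=10, seed=0):
    # One cross-validation fold (one held-out subject):
    # 1) hold out a fraction of training trials as a validation set,
    #    used as the stop criterion for the optimization,
    # 2) standardize all sets with training-set statistics,
    # 3) train the decoder from several random initializations.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X_train))
    n_val = int(round(val_fraction * len(X_train)))
    val_idx, tr_idx = idx[:n_val], idx[n_val:]
    X_tr, y_tr = X_train[tr_idx], y_train[tr_idx]
    X_val, y_val = X_train[val_idx], y_train[val_idx]
    X_tr, X_val, X_te = standardize_with_train_stats(X_tr, X_val, X_test)
    models = []
    for init in range(n_inits):
        # train_decoder is a placeholder for the actual CNN training loop
        # (with early stopping monitored on the validation set).
        models.append(train_decoder(X_tr, y_tr, X_val, y_val, init_seed=init))
    return models, X_te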
“…CNNs automatically learn the features that maximize between-class discriminability directly from the input multivariate neural activity and proved to significantly outperform traditional machine learning approaches (e.g., linear discriminant analysis or support vector machines applied on handcrafted EEG features) in a variety of tasks (e.g., motor and attention decoding) 46,47,50–52. Thus, the features learned by CNNs, being automatically learned on the input and providing improved decoding capabilities, likely characterize the neural processes underlying the decoded states in a more complete and reliable way than traditional machine learning approaches, and even more completely than traditional EEG analyses.…”
Section: Introduction (mentioning)
confidence: 99%
“…Moreover, measures of connectivity have potential applications in motor-based brain–computer interfaces; indeed, motor states can be decoded exploiting artificial intelligence approaches not only by using scalp-level EEG [64,65], but also from features related to brain network connectivity [66]. Interestingly, the knowledge learned by these decoders could also be exploited to analyze, in a data-driven way, the most relevant interactions for a target variable under analysis [64,65,67–69] (e.g., a specific movement), by designing and applying explainable artificial intelligence approaches specific for functional connectivity analyses.…”
Section: Discussion (mentioning)
confidence: 99%
“…Crucially, the DNN-based learning system automatically learns the most relevant neural features to realize the desired input-output mapping. Over the past decade, DNNs were successfully designed and applied for neural decoding [46], performing on par with or even outperforming state-of-the-art machine learning approaches [16, 44–51]. Furthermore, DNNs facilitate the adoption of transfer learning.…”
Section: Introduction (mentioning)
confidence: 99%