2018
DOI: 10.1002/cpe.4413
Short time Fourier transformation and deep neural networks for motor imagery brain computer interface recognition

Abstract: Motor imagery (MI) is an important control paradigm in the field of brain‐computer interface (BCI), which enables the recognition of personal intention. So far, numerous methods have been designed to classify EEG signal features for MI tasks. However, deep neural networks have seldom been applied to analyze EEG signals. In this study, two novel kinds of deep learning schemes based on convolutional neural networks (CNN) and Long Short‐Term Memory (LSTM) were proposed for MI classification. The frequency …

Cited by 73 publications (40 citation statements)
References 32 publications (37 reference statements)
“…Methods vary across these studies and include grid search [47,48], Bayesian methods [49][50][51] (one fails to report the specific approach [49]), trial and error [24,52], and unstated approaches, likely indicating trial and error [53,54]. In six of these studies, only partial results are reported in relation to HPs [24,48,50,51,53,54]. For example, in an otherwise excellent paper [50], only the optimal values for structural parameters are presented, and the authors fail to report on the effects of optimizing learning rate and learning rate decay.…”
Section: Introduction
Confidence: 99%
“…As future work, to enhance the impact of the tested Deep Learning models, we plan to employ datasets that contain more labeled MI tasks; fusing CNNs with different characteristics and architectures is also to be considered, to learn more complex relationships between spatial patterns and extracted t-f representations and to make the learned CNN weights more accessible to interpret [53,54].…”
Section: Discussion and Concluding Remarks
Confidence: 99%
“…As future work, to enhance the impact of the tested Deep Learning models, we plan to employ datasets that contain more labeled MI tasks; fusing CNNs with different characteristics and architectures is also to be considered, to learn more complex relationships between spatial patterns and extracted t-f representations and to make the learned CNN weights more accessible to interpret [49,50].…”
Section: A02t
Confidence: 99%