2018 IEEE International Conference on Pervasive Computing and Communications (PerCom)
DOI: 10.1109/percom.2018.8444575

Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals

Abstract: An electroencephalography (EEG) based Brain Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals, while essential for effective operation of BCI systems, is challenging due …

Cited by 81 publications (75 citation statements) | References 33 publications
“…Deep learning has recently been attracting much attention in both academia and industry, owing to its excellent performance in research areas such as computer vision, speech recognition, natural language processing, and brain-computer interfaces [15]. Nevertheless, deep learning faces an important challenge: the performance of an algorithm depends heavily on the selection of hyper-parameters.…”
Section: Introduction
confidence: 99%
“…In order to further reduce the dimensionality of the spatio-temporal encodings and cancel background noise effects [22], we train an unsupervised deep autoencoder (DAE) on the fused heterogeneous features produced by combining the CNN and LSTM information. The DAE forms our second level of hierarchy, with 3 encoding and 3 decoding layers, and mean squared error (MSE) as the cost function.…”
Section: Deep Autoencoder for Spatio-temporal Information
confidence: 99%
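The architecture quoted above is compact enough to sketch directly. Below is a minimal PyTorch illustration of an unsupervised autoencoder with three encoding and three decoding layers trained with a mean-squared-error reconstruction loss; the input width, layer sizes, latent dimension, and optimizer settings are assumptions chosen for the example, not values reported in the citing paper.

import torch
import torch.nn as nn

class FusedFeatureAutoencoder(nn.Module):
    # Three encoding and three decoding layers, mirroring the structure described above.
    # Dimensions are illustrative assumptions, not taken from the cited work.
    def __init__(self, in_dim=512, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)                     # latent code passed on to the next stage
        return self.decoder(z), z

model = FusedFeatureAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()                        # reconstruction objective (MSE), as in the quote

fused_features = torch.randn(64, 512)           # stand-in batch of fused CNN/LSTM features
reconstruction, latent = model(fused_features)
loss = criterion(reconstruction, fused_features)
optimizer.zero_grad()
loss.backward()
optimizer.step()

The latent code returned by forward() is what the next level of the quoted hierarchy consumes.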
“…At the third level of the hierarchy, the discrete latent vector representation of the deep autoencoder is fed into an Extreme Gradient Boost-based classification layer [23,24], motivated by [22]. It is a regularized gradient-boosted decision tree that performs well on structured problems.…”
Section: Classification with Extreme Gradient Boost
confidence: 99%
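For readers who want a concrete picture of this third level, the sketch below feeds autoencoder latent vectors into an XGBoost classifier, i.e. regularized gradient-boosted decision trees. The feature dimension, class count, and hyper-parameter values are illustrative assumptions only, not figures from the cited paper.

import numpy as np
from xgboost import XGBClassifier

# Stand-in latent codes from an autoencoder and their MI-EEG class labels (hypothetical).
latent_codes = np.random.randn(200, 32).astype(np.float32)
labels = np.random.randint(0, 5, size=200)      # e.g. five imagined-movement classes

# Regularized gradient-boosted decision trees; hyper-parameters are illustrative only.
clf = XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    reg_lambda=1.0,                             # L2 regularization term on the trees
)
clf.fit(latent_codes, labels)
predictions = clf.predict(latent_codes[:5])

In this setup the tree ensemble operates on the low-dimensional latent codes rather than on raw EEG features, which is exactly the division of labor the quoted hierarchy describes.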
“…To decode the target command, EEG-enabled BCI devices primarily benefit from state-of-the-art machine learning algorithms. Methods such as deep neural networks [11], [12], generative models [13], and Bayesian models [15] have shown satisfactory performance in these systems.…”
confidence: 99%