Proceedings of the ACM International Conference on Computing Frontiers 2016
DOI: 10.1145/2903150.2903159
Decoding EEG and LFP signals using deep learning

Abstract: Deep learning technology is uniquely suited to analyse neurophysiological signals such as the electroencephalogram (EEG) and local field potentials (LFP) and promises to outperform traditional machine-learning-based classification and feature extraction algorithms. Furthermore, novel cognitive computing platforms such as IBM's recently introduced neuromorphic TrueNorth chip allow for deploying deep learning techniques in an ultra-low-power environment with a minimum device footprint. Merging deep learning and …

Cited by 71 publications (46 citation statements)
References 26 publications
“…Non-EEG information, especially gaze data and data about the fixated objects, their environment, possible types of the user's current activity, and so on, can be used as additional features or, in some cases, for selecting a classifier (e.g., specific classifiers can be trained for different steps in sequences of actions, as in the action triplets in our game). If sufficiently large amounts of gaze-synchronized EEG data are harvested during the use of EBCIs, it will become possible to apply deep learning algorithms (LeCun et al., 2015; see also Nurse et al., 2016, on a deep learning implementation on the TrueNorth chip for EEG/ECoG/LFP data) that are able to find hidden patterns in the data and strongly improve classification performance.…”
Section: Discussion (mentioning, confidence: 99%)
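As a hedged illustration of the feature-fusion and classifier-selection idea described in the statement above (not code from the cited works), the sketch below concatenates gaze-derived features with EEG features and selects a classifier according to the current step of an action sequence; all array names, sizes, and the per-step classifier dictionary are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical illustration: one classifier per step of an action sequence
# (e.g., the "action triplets" mentioned above), each trained on EEG features
# concatenated with gaze-derived features.
rng = np.random.default_rng(0)
n_trials, n_eeg_feat, n_gaze_feat = 300, 32, 6

eeg_features = rng.normal(size=(n_trials, n_eeg_feat))     # placeholder EEG features
gaze_features = rng.normal(size=(n_trials, n_gaze_feat))   # placeholder gaze features
labels = rng.integers(0, 2, size=n_trials)                 # e.g., intention vs. no intention
step_ids = rng.integers(0, 3, size=n_trials)               # position within the action triplet

X = np.hstack([eeg_features, gaze_features])               # feature-level fusion

# Train one classifier per step in the sequence.
classifiers = {}
for step in np.unique(step_ids):
    mask = step_ids == step
    classifiers[step] = LogisticRegression(max_iter=1000).fit(X[mask], labels[mask])

# At run time, the classifier is selected by the current step.
def predict(eeg_vec, gaze_vec, current_step):
    x = np.hstack([eeg_vec, gaze_vec]).reshape(1, -1)
    return classifiers[current_step].predict_proba(x)[0, 1]

print(predict(eeg_features[0], gaze_features[0], step_ids[0]))
```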
“…Occlusion sensitivity techniques [92,26,175] use a similar idea, by which the decisions of the network are analyzed when different parts of the input are occluded. Several studies used backpropagation-based techniques to generate input maps that maximize activations of specific units [188,144,160,15]. These maps can then be used to infer the role of specific neurons, or the kind of input they are sensitive to.…”
Inspection techniques tabulated in the citing review: analysis of activations [212,194,87,83,208,167,154,109]; input-perturbation network-prediction correlation maps [149,191,67,16,150]; generating input to maximize activation [188,144,160,15]; occlusion of input [92,26,175]; and a further, unlabelled group [135,211,86,34,87,200,182,122,170,228,164,109,204,85,25].
Section: Inspection of Trained Models (mentioning, confidence: 99%)
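The occlusion-sensitivity idea summarized in this statement can be sketched roughly as follows; this is not the procedure of any particular cited study, and `model` here stands for any trained classifier that maps a (channels, samples) array to class probabilities.

```python
import numpy as np

def occlusion_sensitivity(model, x, window=50, stride=25, fill=0.0):
    """Occlude successive time windows of a (channels, samples) input and
    measure the drop in the model's confidence for its original prediction.

    `model` is assumed (for this sketch) to be a callable mapping a
    (channels, samples) array to a vector of class probabilities."""
    baseline = model(x)
    target = int(np.argmax(baseline))               # class predicted on the intact input
    n_samples = x.shape[1]
    importance = np.zeros(n_samples)
    counts = np.zeros(n_samples)

    for start in range(0, n_samples - window + 1, stride):
        occluded = x.copy()
        occluded[:, start:start + window] = fill    # mask this segment on all channels
        drop = baseline[target] - model(occluded)[target]
        importance[start:start + window] += drop    # larger drop = more important segment
        counts[start:start + window] += 1

    counts[counts == 0] = 1                         # avoid division by zero at the edges
    return importance / counts                      # per-sample importance map

# Toy usage with a dummy "model" that just thresholds the mean of channel 0.
def dummy_model(x):
    p = 1.0 / (1.0 + np.exp(-x[0].mean()))
    return np.array([1.0 - p, p])

x = np.random.randn(8, 500)                         # 8 channels, 500 samples
print(occlusion_sensitivity(dummy_model, x).shape)  # -> (500,)
```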
“…While feature extraction is a prerequisite for most motor BCI decoding algorithms, end-to-end learning—that is, learning from raw data without any prior feature extraction—has recently been reported in several offline motor BCI studies (Wang Z. et al., 2013; Nurse et al., 2015b, 2016; Schirrmeister et al., 2017). In these studies, raw or preprocessed neural signals are directly fed to decoders.…”
Section: Feature Extraction (mentioning, confidence: 99%)
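A minimal sketch of what "end-to-end" decoding means in this context, assuming PyTorch and arbitrary channel counts, window lengths, and layer sizes (none taken from the cited studies): raw multichannel signal windows are fed straight into a small 1D convolutional network that outputs class scores, with no hand-crafted feature extraction in between.

```python
import torch
import torch.nn as nn

class EndToEndDecoder(nn.Module):
    """Toy end-to-end decoder: raw (batch, channels, samples) windows in,
    class logits out. Layer sizes are illustrative assumptions."""
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, stride=2),  # temporal filtering
            nn.BatchNorm1d(16),
            nn.ELU(),
            nn.Conv1d(16, 32, kernel_size=11, stride=2),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),                              # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: raw signal, shape (batch, channels, samples)
        z = self.features(x).squeeze(-1)  # (batch, 32)
        return self.classifier(z)         # (batch, n_classes) logits

model = EndToEndDecoder(n_channels=32, n_classes=2)
raw_window = torch.randn(4, 32, 500)      # 4 windows of 500 samples, 32 channels
print(model(raw_window).shape)            # torch.Size([4, 2])
```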
“…These models then learn how to both extract and decode useful neural signal characteristics during model identification. End-to-end learning has been investigated for movement classification (Nurse et al, 2015b , 2016 ; Schirrmeister et al, 2017 ) and trajectory prediction (Wang Z. et al, 2013 ) from EEG (Nurse et al, 2015b , 2016 ; Schirrmeister et al, 2017 ) and ECoG neural signals (Wang Z. et al, 2013 ) acquired either during motor imagery tasks or movement execution. These models generally rely on deep learning decoders, such as multi-layer perceptrons (Nurse et al, 2015b ) or convolutional neural networks and their variants (Wang Z. et al, 2013 ; Nurse et al, 2016 ; Schirrmeister et al, 2017 ).…”
Section: Feature Extraction (mentioning, confidence: 99%)
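To round out the previous sketch, a hedged training-loop fragment follows, again with PyTorch and randomly generated stand-in data rather than the datasets or hyperparameters of the cited studies, showing how such a decoder would be fit directly on labelled raw-signal windows.

```python
import torch
import torch.nn as nn

# Stand-in data: 256 labelled raw-signal windows (32 channels x 500 samples).
X = torch.randn(256, 32, 500)
y = torch.randint(0, 2, (256,))
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X, y), batch_size=32, shuffle=True
)

model = nn.Sequential(                                # any end-to-end decoder would do;
    nn.Conv1d(32, 16, kernel_size=25, stride=2),      # this is a minimal stand-in network
    nn.ELU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                                # a few epochs for illustration
    total = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)                 # logits vs. integer class labels
        loss.backward()
        optimizer.step()
        total += loss.item() * xb.size(0)
    print(f"epoch {epoch}: mean loss {total / len(X):.3f}")
```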