2020
DOI: 10.1109/tbme.2019.2908099

Information Theoretic Feature Transformation Learning for Brain Interfaces

Abstract: Objective: A variety of pattern analysis techniques for model training in brain interfaces exploit neural feature dimensionality reduction based on feature ranking and selection heuristics. In light of broad evidence demonstrating the potential sub-optimality of ranking-based feature selection by any criterion, we propose to extend this focus with an information theoretic learning driven feature transformation concept. Methods: We present a maximum mutual information linear transformation (MMI-LinT), and a …
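
As a rough illustration of the transformation-learning idea described in the abstract, the sketch below learns a linear projection whose output stays informative about the class label. It does not reproduce the paper's MMI-LinT estimator; instead it uses a common variational surrogate, in which training the projection W jointly with a softmax head by cross-entropy maximizes a lower bound on I(Wx; y). All names, dimensions, and hyperparameters are illustrative assumptions.

# Minimal sketch (not the paper's MMI-LinT estimator): learn a linear projection W
# whose output Wx retains label information. Training W jointly with a softmax head
# by cross-entropy maximizes a standard variational lower bound on
# I(Wx; y) = H(y) - H(y|Wx), since cross-entropy upper-bounds H(y | Wx).
import torch
import torch.nn as nn

def fit_linear_mi_projection(X, y, d_out=4, n_classes=4, epochs=200, lr=1e-2):
    # X: (n_samples, d_in) float tensor of neural features; y: (n_samples,) long labels
    d_in = X.shape[1]
    W = nn.Linear(d_in, d_out, bias=False)    # the learned linear feature transformation
    head = nn.Linear(d_out, n_classes)        # variational posterior q(y | Wx)
    opt = torch.optim.Adam(list(W.parameters()) + list(head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(W(X)), y)         # minimizing this tightens the MI lower bound
        loss.backward()
        opt.step()
    return W                                  # W(X_new) gives the reduced-dimension features

A downstream classifier can then be trained on W(X); the paper's own method optimizes an explicit mutual information estimate directly rather than this cross-entropy surrogate.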

Cited by 15 publications (13 citation statements). References 46 publications (67 reference statements).
“…In combination with the theoretical advancements on gradient-based methods of neural network interpretability (e.g., layer-wise relevance propagation [1,29]), synergies across features, as highlighted by high-dimensional feature relevances, can yield significant insights in the application domain. Such feature-synergy based ideas were found particularly interesting for feature learning in brain interfacing, as we studied earlier [32,33], as well as for gene expression data analysis [23], consistent with their biological interpretations.…”
Section: Discussion
confidence: 64%
“…We motivate our study in light of these works, where we aim to use standard gradient descent based artificial neural network training and inference pipelines to perform nonlinear maximum mutual information based feature transformations. We previously explored this idea for neurophysiological feature transformations in brain-computer interfaces [32], which we re-address here in the context of neural networks. Recently, a different line of work has focused on estimating the mutual information of high-dimensional continuous variables with neural networks, initially proposed as mutual information neural estimation (MINE) [3].…”
Section: Related Work
confidence: 99%
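
Since the excerpt above points to mutual information neural estimation (MINE) [3], a brief sketch of that idea may help: a small statistics network T is trained to maximize the Donsker-Varadhan lower bound I(X; Z) >= E_{p(x,z)}[T(x,z)] - log E_{p(x)p(z)}[exp T(x,z)]. The network architecture, batching, and hyperparameters below are illustrative assumptions, not the cited implementation.

# Sketch of a MINE-style estimator: ascend the Donsker-Varadhan bound
#   I(X; Z) >= E_joint[T(x, z)] - log E_marginal[exp(T(x, z))],
# where "marginal" samples are obtained by shuffling z within the batch.
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    def __init__(self, dx, dz, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dx + dz, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

def mine_lower_bound(T, x, z):
    joint = T(x, z).mean()                          # expectation over samples of p(x, z)
    z_perm = z[torch.randperm(z.shape[0])]          # break pairing -> approximates p(x)p(z)
    marginal = torch.logsumexp(T(x, z_perm), dim=0) - math.log(z.shape[0])
    return joint - marginal                         # lower bound on I(X; Z)

# Training loop (illustrative): maximize the bound by gradient ascent.
# T = StatisticsNetwork(dx, dz); opt = torch.optim.Adam(T.parameters(), lr=1e-3)
# for x_batch, z_batch in loader:
#     loss = -mine_lower_bound(T, x_batch, z_batch); opt.zero_grad(); loss.backward(); opt.step()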
“…In doing so, they constructed a nearest-neighbour graph to model the IMFs' intrinsic structure. Özdenizci and Erdoğmuş [114] proposed information theory based linear and non-linear feature transformation approaches to select optimal features for a multi-class MI-EEG BCI system. Pei et al. [71] used stacked auto-encoders on spectral features to reduce the dimensionality and achieve high accuracy in a multi-class asynchronous MI-BCI system.…”
Section: Key Issues in MI-Based BCI
confidence: 99%
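
For the stacked auto-encoder route mentioned above [71], a minimal sketch of spectral-feature dimensionality reduction follows. The layer sizes are assumed for illustration, and the network is trained end-to-end with a reconstruction loss here for brevity, rather than with greedy layer-wise pretraining.

# Minimal sketch: autoencoder-based reduction of spectral EEG features.
# The (128, 64, 32) layer sizes are illustrative, not the cited configuration.
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    def __init__(self, d_in, dims=(128, 64, 32)):
        super().__init__()
        enc, dec, prev = [], [], d_in
        for d in dims:                               # encoder: d_in -> 128 -> 64 -> 32
            enc += [nn.Linear(prev, d), nn.ReLU()]
            prev = d
        for d in reversed((d_in,) + dims[:-1]):      # mirrored decoder back to d_in
            dec += [nn.Linear(prev, d), nn.ReLU()]
            prev = d
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec[:-1])      # drop final ReLU for real-valued output
    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train with nn.MSELoss() on reconstructions, then use encoder(x) as the
# low-dimensional features for the downstream MI-BCI classifier.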
“…Electroencephalogram (EEG) based brain-computer interface (BCI) systems aim to identify users' intentions from brain recordings, with potential uses in neurorehabilitation systems [1]. However, moderate decoding accuracies have limited the practical use of BCIs [2], [3]. Due to the high costs and effort of data collection, EEG datasets diverge widely in their recording environments (e.g., stimuli), the equipment and devices used, and the ground truths derived.…”
Section: Introduction
confidence: 99%