2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
DOI: 10.1109/embc.2013.6610816

Information-theoretic metric learning: 2-D linear projections of neural data for visualization

Abstract: Intracortical neural recordings are typically high-dimensional due to many electrodes, channels, or units and high sampling rates, making it very difficult to visually inspect differences among responses to various conditions. By representing the neural response in a low-dimensional space, a researcher can visually evaluate the amount of information the response carries about the conditions. We consider a linear projection to 2-D space that also parametrizes a metric between neural responses. The proje…
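As a rough illustration of the setup described in the abstract: the learned object is a single matrix A ∈ ℝ^{2×d} that both maps responses into the plane and induces a distance between them. The sketch below is a minimal, assumption-laden illustration (placeholder names, shapes, and data), not the paper's implementation.

```python
import numpy as np

# Minimal sketch: a 2-D linear projection A that simultaneously defines a
# Mahalanobis-type metric between neural responses. Dimensions and names
# are illustrative assumptions.
rng = np.random.default_rng(0)
d = 50                                 # assumed input dimension (units x time bins)
A = 0.1 * rng.standard_normal((2, d))  # projection matrix to be learned

def project(X, A):
    """Map responses X of shape (n, d) into the 2-D visualization space."""
    return X @ A.T

def metric(x, y, A):
    """Distance induced by the projection: d_A(x, y) = ||A (x - y)||_2."""
    z = A @ (x - y)
    return float(np.sqrt(z @ z))
```

Because d_A is just Euclidean distance after projection, any structure visible in the 2-D scatter plot is exactly the structure the learned metric sees.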


Cited by 10 publications (10 citation statements). References 15 publications.
“…The classification results are tabulated in Table II, with a negligible increase of the classification rate versus the binned approach. [V. Conclusion] We presented an approach to learn weighted combinations of metrics or kernels for neural response classification, extending previous work [4]. The metric learning hinges on a kernel matrix-based measure of entropy [3], [8].…”
Section: B. Metric Learning for Spikes
confidence: 98%
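The weighted combination of metrics or kernels mentioned above can be written as K = Σ_m w_m K_m with nonnegative weights. A minimal sketch, assuming precomputed Gram matrices and a sum-to-one normalization that is our convention, not the cited paper's:

```python
import numpy as np

def combine_kernels(gram_matrices, weights):
    """Weighted sum of precomputed Gram matrices: K = sum_m w_m * K_m.
    Nonnegative weights keep the combination positive semidefinite; the
    normalization of the weights is an illustrative assumption."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, None)
    w = w / (w.sum() + 1e-12)
    return sum(wi * Ki for wi, Ki in zip(w, gram_matrices))
```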
“…From this a measure of conditional entropy S_α(B|C) = S_α(B, C) − S_α(C) can be applied, and this form of conditional entropy was used in previous work for learning a Mahalanobis distance [3], [4].…”
Section: Entropy
confidence: 99%
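The conditional entropy quoted above is built from the kernel matrix-based entropy of [3]: for a positive semidefinite Gram matrix normalized to unit trace, S_α(K) = log₂(Σ_i λ_i^α) / (1 − α), and the joint entropy comes from the Hadamard product of the two Gram matrices. A sketch under those standard conventions:

```python
import numpy as np

def matrix_entropy(K, alpha=1.01):
    """Matrix-based Renyi alpha-entropy of a PSD Gram matrix K:
    S_alpha(K) = log2(sum_i lambda_i**alpha) / (1 - alpha),
    with K first normalized to unit trace."""
    lam = np.linalg.eigvalsh(K / np.trace(K))
    lam = np.clip(lam, 0.0, None)   # guard tiny negative eigenvalues
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def conditional_entropy(K_B, K_C, alpha=1.01):
    """S_alpha(B|C) = S_alpha(B, C) - S_alpha(C), with the joint Gram
    matrix taken as the Hadamard (elementwise) product K_B * K_C."""
    return matrix_entropy(K_B * K_C, alpha) - matrix_entropy(K_C, alpha)
```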
“…Several other methods have been used in the recent literature: kernel autoregressive moving average (KARMA; Wong et al., 2013), quantized kernel least mean square (Li et al., 2014), support vector machines (Cao et al., 2013; Xu et al., 2013; Wang et al., 2014), K-nearest neighbors (Brockmeier et al., 2013; Ifft et al., 2013; Xu et al., 2013), naïve Bayes (Bishop et al., 2014), and artificial neural networks (Chen et al., 2013; Mahmoudi et al., 2013; Pohlmeyer et al., 2014). All of these methods allow highly non-linear neural models.…”
Section: Algorithms for Decoding
confidence: 99%
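Among the methods listed, K-nearest neighbors is the one that most directly consumes a learned metric or projection: decoding reduces to majority voting among the closest projected responses. A toy sketch on synthetic placeholder data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy K-nearest-neighbor decoding of condition labels from 2-D projected
# responses; the data below are synthetic placeholders.
rng = np.random.default_rng(1)
Z = rng.standard_normal((200, 2))   # projected neural responses
y = rng.integers(0, 4, size=200)    # condition labels (4 conditions)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
scores = cross_val_score(knn, Z, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```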
“…Also using a supervised approach, Brockmeier et al. (2013) proposed a method for computing a linear dimensionality reduction which maximizes the information between the class labels and the projected neural data. The low-dimensional data can be used for visualization or decoding via distance-based methods such as K-nearest neighbors.…”
Section: Neuron Selection
confidence: 99%
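One way to make the criterion in this statement concrete is to score a candidate projection A by the matrix-based mutual information between the class labels and Z = XAᵀ. The Gaussian kernel, the 0/1 label kernel, and all names below are our assumptions, a sketch consistent with the information-theoretic framing rather than the authors' exact objective:

```python
import numpy as np

def S_alpha(K, alpha=1.01):
    """Matrix-based Renyi entropy of a PSD Gram matrix (unit-trace normalized)."""
    lam = np.clip(np.linalg.eigvalsh(K / np.trace(K)), 0.0, None)
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def label_information(X, labels, A, sigma=1.0, alpha=1.01):
    """Score a projection A by I(Z; L) = S(Z) + S(L) - S(Z, L), where
    Z = X @ A.T. Illustrative choice of kernels, not the paper's code."""
    Z = X @ A.T
    D2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    K_Z = np.exp(-D2 / (2.0 * sigma ** 2))                    # Gaussian Gram matrix
    K_L = (labels[:, None] == labels[None, :]).astype(float)  # label Gram matrix
    return S_alpha(K_Z, alpha) + S_alpha(K_L, alpha) - S_alpha(K_Z * K_L, alpha)
```

A projection maximizing such a score could then be sought by gradient ascent over A, with the resulting 2-D data plotted directly or fed to a distance-based decoder such as K-nearest neighbors.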