2014
DOI: 10.3390/e16115721
Applying Information Theory to Neuronal Networks: From Theory to Experiments

Abstract: Information theory is being increasingly used to analyze complex, self-organizing processes on networks, predominantly in analytical and numerical studies. Perhaps one of the most paradigmatic complex systems is a network of neurons, in which cognition arises from the information storage, transfer, and processing among individual neurons. In this article we review experimental techniques suitable for validating information-theoretical predictions in simple neural networks, as well as generating new hypotheses. …

Cited by 6 publications (2 citation statements)
References 45 publications (62 reference statements)
“…At a maximum, H = 1 bit. Numerous hypotheses, models, and theories use the above equation for the Shannon entropy of a binary random variable as a starting point for more sophisticated analyses of neuronal information content (Borst and Theunissen, 1999;Arcas et al, 2003;Victor, 2006;Sharpee and Bialek, 2007;Jensen et al, 2013;Jung et al, 2014;Sengupta et al, 2014). For example, the popular "direct method" for calculating the information content of a spike train begins by dividing its total duration into a number of evenly spaced time bins.…”
Section: Introduction
confidence: 99%
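The binary-entropy starting point and the binning step of the "direct method" described in the quoted passage can be sketched as follows. This is a minimal illustration, not the cited authors' implementation; the function names, bin width, and word length are assumptions chosen for the example.

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy in bits of a binary random variable with P(spike) = p.
    Maximal (H = 1 bit) at p = 0.5."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def direct_method_entropy(spike_times, duration, bin_width, word_length):
    """Naive plug-in sketch of the 'direct method': divide the spike train
    into evenly spaced time bins, mark each bin as spike/no-spike, slide a
    window of `word_length` bins to form binary words, and estimate the
    entropy of the empirical word distribution."""
    n_bins = int(duration / bin_width)
    binned = np.zeros(n_bins, dtype=int)
    idx = (np.asarray(spike_times) / bin_width).astype(int)
    binned[idx[idx < n_bins]] = 1  # binary code: bin contains >= 1 spike
    words = np.array([binned[i:i + word_length]
                      for i in range(n_bins - word_length + 1)])
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))  # word entropy in bits
```

Note that this plug-in estimator is biased for short recordings; the literature cited above develops corrections for exactly that problem.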
“…Moreover, the diversity of neural spiking strongly depends on the properties of their synapses which remarkably vary in different types of neurons. This quantity is usually calculated using “mutual information” between a spike train and the stimulus as an information theoretic approach (Kumbhani et al, 2007; Faghihi et al, 2013; Fan, 2014; Jung et al, 2014). As neural systems should be able to detect a fluctuation in a stimulus intensity, a new encoding efficiency measure has been recently introduced which uses the geometric distance between stimulus and the response of a given neuron (Faghihi and Moustafa, 2015).…”
Section: Introduction
confidence: 99%
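The mutual information between a discrete stimulus and a discretized spike response, as used in the approaches cited above, can be sketched with a simple plug-in estimator. The function name and the paired-sample interface are illustrative assumptions, not the estimator of any particular cited paper.

```python
import numpy as np

def mutual_information(stimulus, response):
    """Plug-in estimate of I(S; R) in bits from paired discrete samples,
    via I(S; R) = sum_{s,r} p(s,r) * log2( p(s,r) / (p(s) * p(r)) )."""
    stimulus = np.asarray(stimulus)
    response = np.asarray(response)
    mi = 0.0
    for s in np.unique(stimulus):
        p_s = np.mean(stimulus == s)
        for r in np.unique(response):
            p_r = np.mean(response == r)
            p_sr = np.mean((stimulus == s) & (response == r))
            if p_sr > 0:  # 0 * log 0 contributes nothing
                mi += p_sr * np.log2(p_sr / (p_s * p_r))
    return mi
```

For a response that perfectly tracks two equiprobable stimuli the estimate is 1 bit; for an independent response it is 0 bits, matching the intuition that mutual information quantifies how much of the stimulus the spike train encodes.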