2007
DOI: 10.1152/jn.00559.2007

Correcting for the Sampling Bias Problem in Spike Train Information Measures

Abstract: Information Theory enables the quantification of how much information a neuronal response carries about external stimuli and is hence a natural analytic framework for studying neural coding. The main difficulty in its practical application to spike train analysis is that estimates of neuronal information from experimental data are prone to a systematic error (called "bias"). This bias is an inevitable consequence of the limited number of stimulus-response samples that it is possible to record in a real experiment…
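To make the bias described in the abstract concrete, below is a minimal sketch (not taken from the paper; all names and parameters are illustrative). It shows that the naive "plug-in" estimate of mutual information is systematically positive even when stimulus and response are statistically independent, and that the error shrinks as the number of trials grows.

```python
import numpy as np

def plugin_mi(x, y, n_bins):
    """Naive (plug-in) mutual information estimate, in bits."""
    p_xy = np.histogram2d(x, y, bins=n_bins)[0]
    p_xy /= p_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)  # marginal of x
    p_y = p_xy.sum(axis=0, keepdims=True)  # marginal of y
    nz = p_xy > 0                          # skip log(0) terms
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

rng = np.random.default_rng(0)
for n_trials in (20, 200, 2000):
    # Stimulus and response are drawn independently, so the true MI is 0 bits,
    # yet the average plug-in estimate is above zero at small sample sizes.
    est = np.mean([plugin_mi(rng.integers(0, 4, n_trials),
                             rng.integers(0, 4, n_trials), 4)
                   for _ in range(100)])
    print(f"{n_trials:5d} trials: mean plug-in MI = {est:.3f} bits (true value 0)")
```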

Cited by 396 publications (465 citation statements)
References 26 publications

“…A wide variety of approaches have been proposed to address this problem [Paninski, 2003; Panzeri et al, 2007]. The simplest approach is to subtract the mean of the distribution expected under the null hypothesis that there is no relationship between the two variables.…”
Section: Review of Information Theory for Neuroimaging
confidence: 99%
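The "simplest approach" quoted above, subtracting the mean of the null distribution, can be sketched as follows for a plug-in mutual information estimate. This is an illustrative implementation, not code from the cited papers; the function names, bin counts, and shuffle count are all assumptions.

```python
import numpy as np

def plugin_mi(x, y, n_bins):
    # Same naive plug-in estimator as sketched after the abstract.
    p_xy = np.histogram2d(x, y, bins=n_bins)[0]
    p_xy /= p_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

def shuffle_corrected_mi(x, y, n_bins=4, n_shuffles=200, seed=0):
    """Raw MI minus the mean MI of a shuffled (null) ensemble."""
    rng = np.random.default_rng(seed)
    raw = plugin_mi(x, y, n_bins)
    # Permuting y destroys any x-y relationship, so the shuffled MI values
    # sample the estimator under the null hypothesis of no relationship;
    # their mean approximates the estimator's bias.
    null = [plugin_mi(x, rng.permutation(y), n_bins) for _ in range(n_shuffles)]
    return raw - float(np.mean(null))
```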
“…We calculate the mutual information in the merged partner sequence, denoted by I_i^merge, which measures the predictability of conversation events that does not result from the burstiness. We do not directly compare I_i^merge with the original I_i because the merging procedure shortens the length of the partner sequence and the amount of mutual information generally depends on the length of a sequence [36]. Instead, we carry out a bootstrap test for I_i^merge.…”
Section: Appendix D: Components of the Predictability of the Conversation
confidence: 99%
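The bootstrap test mentioned in this excerpt is not spelled out there. The following is a hypothetical sketch of one common form of such a test, a permutation-style resampling null for the mutual information at the fixed (merged) sequence length; mi_fn and all parameters are placeholders, not the cited study's code.

```python
import numpy as np

def mi_bootstrap_pvalue(x, y, mi_fn, n_resamples=1000, seed=0):
    """One-sided resampling p-value for an observed mutual information."""
    rng = np.random.default_rng(seed)
    observed = mi_fn(x, y)
    # Null distribution of MI at the same sequence length: permuting y
    # removes any dependence while keeping the length and marginals fixed,
    # so the length-dependent bias affects observed and null values alike.
    null = np.array([mi_fn(x, rng.permutation(y)) for _ in range(n_resamples)])
    # Add-one smoothing so the p-value is never exactly zero.
    return (1 + np.sum(null >= observed)) / (1 + n_resamples)
```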
“…This estimation of MI is different from nonparametric approaches in that it can access dependence that is only in reach of the classifier; thus one has to make sure that the classifier captures the main aspects of its dependence. Note that we use the naive estimator for mutual information [without bias correction (Panzeri et al 2007)]. Since all MI value calculations involve an identical number of bins (that is, two, one for each class), we can nevertheless safely compare results even for classifications with different numbers of features.…”
confidence: 99%
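As a hedged illustration of the setup this excerpt describes (not the authors' code): mutual information between binary class labels and binary classifier outputs, computed with the naive plug-in estimator and exactly two bins per variable. Because the bin count is fixed at two for every comparison, the uncorrected bias is of the same order across classifications, which is the point the passage makes.

```python
import numpy as np

def classifier_mi(labels, predictions):
    """Naive plug-in MI (bits) between true and predicted binary classes."""
    # 2x2 contingency table of (true class, predicted class).
    table = np.zeros((2, 2))
    for t, p in zip(labels, predictions):
        table[t, p] += 1
    p_xy = table / table.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

# Example: a classifier that is right 90% of the time on balanced classes.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 1000)
predictions = np.where(rng.random(1000) < 0.9, labels, 1 - labels)
print(f"MI(label; prediction) = {classifier_mi(labels, predictions):.3f} bits")
```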