2017
DOI: 10.3390/e19040150
Minimum Sample Size for Reliable Causal Inference Using Transfer Entropy

Abstract: Transfer Entropy has been applied to experimental datasets to unveil causality between variables. In particular, its application to non-stationary systems has posed a great challenge due to restrictions on the sample size. Here, we have investigated the minimum sample size that produces a reliable causal inference. The methodology has been applied to two prototypical models: the linear autoregressive moving-average (ARMA) model and the non-linear logistic map. The relationship between the Transfer Entropy value and t…
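For readers unfamiliar with the method the abstract summarizes, the sketch below estimates transfer entropy between two coupled logistic maps with a plug-in (histogram) estimator. It is a minimal illustration, not the paper's code: the history length (1), bin count (4), coupling strength (0.3), and all function names are assumptions.

```python
import numpy as np

def entropy(*cols):
    """Plug-in Shannon entropy (bits) of the joint distribution of the
    given discrete columns."""
    joint = np.stack(cols, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y, n_bins=4):
    """Plug-in estimate of T(Y -> X) in bits, history length 1, via the
    identity T = H(X1,X0) + H(X0,Y0) - H(X0) - H(X1,X0,Y0)."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=n_bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins=n_bins)[1:-1])
    x1, x0, y0 = xd[1:], xd[:-1], yd[:-1]
    return (entropy(x1, x0) + entropy(x0, y0)
            - entropy(x0) - entropy(x1, x0, y0))

# Two chaotic logistic maps, with y unidirectionally driving x.
rng = np.random.default_rng(0)
x, y = rng.random(2)
xs, ys = np.empty(5000), np.empty(5000)
for t in range(5000):
    u = 0.7 * x + 0.3 * y          # x's next state mixes in the old y
    x, y = 4.0 * u * (1.0 - u), 4.0 * y * (1.0 - y)
    xs[t], ys[t] = x, y

print("T(Y->X):", transfer_entropy(xs, ys))   # expected: clearly positive
print("T(X->Y):", transfer_entropy(ys, xs))   # expected: near zero
```

Because y drives x and not the reverse, T(Y→X) should come out clearly larger than T(X→Y) once the sample is long enough, which is exactly the regime the paper's minimum-sample-size question probes.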

Cited by 16 publications (19 citation statements)
References 20 publications
“…However, small data sets will increase the likelihood of type 2 errors (i.e., failing to report a significant information theory result when one is actually present) from surrogate data significance testing and produce bias (see Bias in Entropy and Mutual Information). The number of observations necessary to estimate a probability distribution has been explored to some extent in the literature (Ramos and Macau, 2017), but a great deal of attention has been paid to other methods to assess bias and estimate probability distributions (see Handling Continuous Data and Further Refinements).…”
Section: Methods
confidence: 99%
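The excerpt refers to surrogate data significance testing; below is a minimal sketch of one common variant, which shuffles the source series to build a null distribution. It reuses transfer_entropy() from the sketch after the abstract, and the surrogate count and function names are assumptions.

```python
import numpy as np

# Reuses transfer_entropy() from the earlier sketch.

def surrogate_p_value(x, y, n_surrogates=200, seed=0):
    """One-sided p-value for T(Y -> X) against shuffled-source surrogates.
    Permuting y destroys its temporal structure, so the surrogate TE
    values approximate the null of 'no directed coupling from Y'."""
    rng = np.random.default_rng(seed)
    observed = transfer_entropy(x, y)
    null = np.array([transfer_entropy(x, rng.permutation(y))
                     for _ in range(n_surrogates)])
    # +1 correction keeps the p-value away from an exact zero.
    return (1 + np.sum(null >= observed)) / (n_surrogates + 1)
```

With small samples the null distribution widens, so a genuine coupling is more easily lost inside it; that is the type 2 error the excerpt describes.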
“…The number of discrete states is determined by the length of the time series [52]. Two time series are then considered causally related if the information, or entropy, in one discrete signal is reduced given information about the other discrete signal. The difference in entropy between the signal by itself and the signal conditioned on the second variable is the transfer entropy.…”
Section: Methods
confidence: 99%
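In symbols, the quantity this excerpt paraphrases is, for history length 1 (notation assumed here, following Schreiber's standard definition):

```latex
T_{Y \to X} = H(X_{t+1} \mid X_t) - H(X_{t+1} \mid X_t, Y_t)
            = \sum_{x_{t+1},\, x_t,\, y_t} p(x_{t+1}, x_t, y_t)
              \log_2 \frac{p(x_{t+1} \mid x_t, y_t)}{p(x_{t+1} \mid x_t)}
```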
“…In this work, we examined three different methods of discretizing the state space, based on methods previously useful in biological time series [41]: uniformly spaced bins, bins whose size is inferred by kernel density estimation on a normal probability distribution fit to the data, and bins whose size is learned via the Darbellay-Vajda algorithm [53]. As the length of our time series informs the number of states used [52], four states were used in our analysis. We only considered a result to be positive if it was positive using all three discretization schemes.…”
Section: Methods
confidence: 99%
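The consensus criterion in this excerpt (accept a link only if every discretization scheme agrees) is easy to express in code. The sketch below implements uniform bins plus an equal-occupancy stand-in; the KDE-based and Darbellay-Vajda schemes are not implemented, and all names and signatures here are assumptions rather than the authors' code.

```python
import numpy as np

def uniform_bins(series, n_states=4):
    """Uniformly spaced bins over the observed range."""
    edges = np.histogram_bin_edges(series, bins=n_states)[1:-1]
    return np.digitize(series, edges)

def quantile_bins(series, n_states=4):
    """Equal-occupancy bins, used here only as a simple stand-in for
    the KDE-based scheme (not the cited authors' method)."""
    cuts = np.quantile(series, np.linspace(0, 1, n_states + 1)[1:-1])
    return np.digitize(series, cuts)

def consensus_positive(x, y, significance_test,
                       schemes=(uniform_bins, quantile_bins)):
    """Accept a causal link only if significance_test (a callable taking
    two discretized series and returning a bool) passes under every
    discretization scheme."""
    return all(significance_test(s(x), s(y)) for s in schemes)
```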
“…The present investigation refers to the seasonal, unfiltered time series from 1948 to 2000. We therefore have a sample of 53 values, which is sufficient for transfer entropy analysis [34].…”
Section: Data
confidence: 99%
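A quick empirical way to see whether a given sample size is adequate, in the spirit of the cited paper's question, is to watch the estimate stabilize as the series grows. The loop below reuses transfer_entropy() and the coupled-map data xs, ys from the first sketch; the specific lengths are arbitrary.

```python
# Estimate T(Y->X) on growing prefixes; the smallest length beyond which
# the estimates stop drifting is a rough reading of the minimum sample
# size for a reliable inference. (Reuses names from the first sketch.)
for n in (50, 100, 250, 500, 1000, 5000):
    print(f"N={n:5d}  T(Y->X)={transfer_entropy(xs[:n], ys[:n]):.3f}")
```

With few samples relative to the number of joint states, the plug-in estimator biases upward, which is why pairing the estimate with a surrogate test matters at sample sizes of this order.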