2014
DOI: 10.48550/arxiv.1411.2003
Preprint

Efficient Estimation of Mutual Information for Strongly Dependent Variables

Cited by 32 publications (18 citation statements)
References 21 publications
“…Figures 2 and 3, for the two and three component models, respectively, show the results of calculations for k = 5. For a detailed study of dependence on k we refer the readers to [27].…”
Section: Toy Model for Multiplicative Processes
confidence: 99%
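The excerpt above refers to a k-nearest-neighbour mutual information estimator evaluated at k = 5. A minimal sketch of the standard KSG estimator (Kraskov, Stögbauer & Grassberger, 2004) is shown below, assuming SciPy's `cKDTree` and `digamma`; this is an illustrative implementation, not the cited paper's own code, and the function name `ksg_mi` is ours.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=5):
    """KSG estimate of I(X;Y) in nats from paired 1-D samples x, y."""
    n = len(x)
    xy = np.column_stack([x, y])
    # Distance to the k-th neighbour in the joint space (Chebyshev metric);
    # query returns the point itself first, so ask for k + 1 neighbours.
    eps = cKDTree(xy).query(xy, k=k + 1, p=np.inf)[0][:, -1]
    # Count neighbours within eps in each marginal space (minus self).
    # Note: query_ball_point uses an inclusive radius, a slight bias
    # relative to the strict inequality in the original algorithm.
    tree_x, tree_y = cKDTree(x[:, None]), cKDTree(y[:, None])
    nx = np.array([len(tree_x.query_ball_point([xi], r=e)) - 1
                   for xi, e in zip(x, eps)])
    ny = np.array([len(tree_y.query_ball_point([yi], r=e)) - 1
                   for yi, e in zip(y, eps)])
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

# Sanity check on correlated Gaussians, where I(X;Y) = -0.5*log(1 - rho^2).
rng = np.random.default_rng(0)
rho = 0.9
samples = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=2000)
est = ksg_mi(samples[:, 0], samples[:, 1], k=5)
true_mi = -0.5 * np.log(1 - rho**2)
```

The choice of k trades bias against variance, which is why the cited work studies the dependence on k separately.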
“…The estimation of the Shannon mutual information from samples remains an active research problem. Lately, on theoretical as well as practical fronts, there has been a resurgence of interest in entropy and mutual information estimators (see Sricharan et al (2013), Jiao et al (2014), Singh and Póczos (2017), Singh and Póczos (2016), Moon et al (2017), Han et al (2015), Gao et al (2014), Gao et al (2015), Gao et al (2016a), Gao et al (2016b), Angeliki & Dimitris (2009), Walters et al (2009)), and some offer good results even for small samples (Khan et al (2007)).…”
Section: E(p
confidence: 99%
“…The current solution is to use the Blahut-Arimoto algorithm [29], which essentially enumerates over all states, thus being limited to small-scale problems and not being applicable to the continuous domain. More scalable non-parametric estimators have been developed [7,6], but these have a high memory footprint or require a very large number of observations; any approximation may not be a bound on the MI, making reasoning about correctness harder; and they cannot easily be composed with existing (gradient-based) systems that would allow us to design a unified (end-to-end) system. In the continuous domain, Monte Carlo integration has been proposed [10], but applications of Monte Carlo estimators can require a large number of draws to obtain accurate solutions and manageable variance.…”
Section: Scalable Information Maximisation
confidence: 99%
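The Monte Carlo approach mentioned in the excerpt averages the log density ratio log p(x,y) − log p(x) − log p(y) over draws from the joint. A minimal sketch for a bivariate Gaussian, where the closed form −0.5·log(1 − ρ²) provides a check, is given below; all variable names are illustrative and the densities are known in closed form only because of the Gaussian assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.8, 50_000
cov = np.array([[1.0, rho], [rho, 1.0]])
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
x, y = xy[:, 0], xy[:, 1]

# Log joint density of a standard bivariate Gaussian with correlation rho.
det = 1.0 - rho**2
log_joint = (-0.5 * (x**2 - 2 * rho * x * y + y**2) / det
             - np.log(2 * np.pi) - 0.5 * np.log(det))
# Log marginal densities (standard normals).
log_px = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
log_py = -0.5 * y**2 - 0.5 * np.log(2 * np.pi)

# Monte Carlo average of the log density ratio: an estimate of I(X;Y).
mc_mi = np.mean(log_joint - log_px - log_py)
true_mi = -0.5 * np.log(det)
```

Even in this easy case the estimator's variance shrinks only as 1/n, which illustrates the excerpt's point that accurate Monte Carlo MI estimates can require many draws.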