2012
DOI: 10.3390/e14061103

Minimum Mutual Information and Non-Gaussianity Through the Maximum Entropy Method: Theory and Properties

Abstract: The application of the Maximum Entropy (ME) principle leads to a minimum of the Mutual Information (MI), I(X,Y), between random variables X, Y, which is compatible with prescribed joint expectations and given ME marginal distributions. A sequence of sets of joint constraints leads to a hierarchy of lower MI bounds increasingly approaching the true MI. In particular, using standard bivariate Gaussian marginal distributions, it allows for the MI decomposition into two positive terms: the Gaussian MI (I_g) …
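For orientation (in standard notation, which may differ slightly from the paper's exact symbols), the Gaussian part of that decomposition is the familiar closed form for the MI of a bivariate Gaussian, and the remaining non-Gaussian part is non-negative:

$$ I(X,Y) = I_g + I_{ng}, \qquad I_g = -\tfrac{1}{2}\ln\!\left(1-\rho^2\right), \qquad I_{ng} \ge 0, $$

where $\rho$ is the correlation coefficient of the Gaussian-morphed (standardized) variables.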

Cited by 18 publications (21 citation statements). References 41 publications.
“…The authors have presented in [12] (PP12 hereafter) a general formalism for computing, though not in an explicit form, the MinMI in terms of multiple (m_cr > 1) linear and nonlinear cross expectations included in T_cr. This set can consist of a natural population constraint (e.g., a specific neural behavior), or it can grow without limit through additional expectations computed within a sample, with the MinMI increasing and eventually converging to the total MI. This paper is the natural follow-up of PP12 [12], now studying the statistics (mean or bias, variance and distribution) of the MinMI estimation errors Î_min − I_min, where Ĥ is the ME estimation issued from N-sized samples of iid outcomes. Those errors are roughly similar to the errors of generic MI and entropy estimators (see [13,14] for a thorough review and performance comparisons between MI estimators).…”
Section: The State of the Art
confidence: 99%
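As a purely illustrative numerical sketch (not code from PP12 or from this follow-up paper), the lowest member of the MinMI hierarchy, the Gaussian MI I_g = −½ ln(1 − ρ²), can be estimated from an N-sized iid sample by plugging in the sample correlation; repeating this over many samples exposes the bias and spread of the estimation error of the kind studied above. All function names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mi(rho):
    # Population Gaussian MI: I_g = -0.5 * ln(1 - rho^2)
    return -0.5 * np.log(1.0 - rho**2)

def gaussian_mi_estimate(x, y):
    # Plug-in estimate: replace rho by the sample correlation of the N-sized sample
    rho_hat = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - rho_hat**2)

rho, N = 0.6, 500
cov = [[1.0, rho], [rho, 1.0]]
errors = []
for _ in range(2000):  # replicate N-sized samples to see bias and spread of the error
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=N).T
    errors.append(gaussian_mi_estimate(x, y) - gaussian_mi(rho))

errors = np.asarray(errors)
print(f"bias ~ {errors.mean():.4f}, std ~ {errors.std():.4f}")
```

In this Gaussian toy case the bias shrinks roughly like 1/N and the spread like 1/√N, consistent with the generic behaviour of plug-in entropy and MI estimators.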
“…Then, the MI I(X,Y) becomes the negative copula entropy [24,25], given by I(X,Y) = −H_c, where H_c is the entropy of the copula density of (X,Y). In PP12 [12], we have generalized this problem to a less constrained MinMI version by changing the marginal RVs into ME-prescribed ones (the ME-morphisms, e.g., standard Gaussians) and by imposing a finite set of marginal constraints instead of the full marginal PDFs. Under these conditions, the number of control Lagrange multipliers is finite, leaving open the possibility of using nonlinear minimization algorithms for the MinMI estimation, as already tested in [8]. We further provide asymptotic analytical N-scaled formulas for the variance and distribution of the MinMI estimation errors as functions of the statistics of the ME cross-constraint estimation errors.…”
Section: The Rationale of the Paper
confidence: 99%
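To make the Lagrange-multiplier remark concrete, here is a minimal, self-contained sketch (my own illustration under assumed notation, not the papers' implementation): the ME density subject to a finite set of expectations E[T_k(X)] = a_k has the exponential form p(x) ∝ exp(Σ_k λ_k T_k(x)), and the multipliers λ can be found by nonlinear minimization of the convex dual ψ(λ) = log Z(λ) − λ·a. With mean and second-moment constraints the solution is the standard Gaussian, i.e., the kind of ME-morphism mentioned above.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-8.0, 8.0, 4001)      # quadrature grid for a 1-D marginal
dx = x[1] - x[0]
T = np.vstack([x, x**2])              # constraint functions: T1(x) = x, T2(x) = x^2
a = np.array([0.0, 1.0])              # prescribed expectations: mean 0, second moment 1

def dual(lam):
    # Convex dual psi(lam) = log Z(lam) - lam . a; its minimizer gives the ME density.
    log_q = lam @ T                   # log of the unnormalized ME density on the grid
    m = log_q.max()                   # subtract the maximum to stabilize the exponential
    z = np.exp(log_q - m).sum() * dx  # Z(lam) up to the factor exp(m)
    return m + np.log(z) - lam @ a

lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
p = np.exp(lam @ T)
p /= p.sum() * dx                     # normalized ME density on the grid

# With these two constraints the ME solution is the standard Gaussian, so the
# fitted moments reproduce the targets up to quadrature error:
print((x * p).sum() * dx, (x * x * p).sum() * dx)
```

The same dual minimization extends to joint cross expectations, which is where the MinMI estimation discussed above would enter; that extension is not sketched here.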