2013
DOI: 10.3390/e15125154

Non-Parametric Estimation of Mutual Information through the Entropy of the Linkage

Abstract: A new, non-parametric and binless estimator for the mutual information of a d-dimensional random vector is proposed. First, an equation is deduced that links the mutual information to the entropy of a suitable random vector with uniformly distributed components. When d = 2, this equation reduces to the well-known connection between the mutual information and the entropy of the copula function associated with the original random variables. Hence, the problem of estimating the mutual information of the original rand…
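For the d = 2 case mentioned in the abstract, the well-known connection can be stated explicitly. The following is a sketch of the standard identity in generic notation (not the paper's own symbols): for continuous variables, the mutual information equals the negative differential entropy of the copula density.

```latex
% U = F_X(X), V = F_Y(Y) are the probability-integral transforms (uniform marginals).
% With copula density c(u,v), the joint density factors as
%   f_{XY}(x,y) = c(F_X(x), F_Y(y)) f_X(x) f_Y(y),
% so the mutual information reduces to the (negative) copula entropy:
\[
  I(X;Y) = \iint_{[0,1]^2} c(u,v)\,\log c(u,v)\,\mathrm{d}u\,\mathrm{d}v = -h(c).
\]
```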

Cited by 14 publications (9 citation statements)
References 27 publications
“…For a fixed value of k, let $\varepsilon(i)$ denote the distance from $w_i \equiv (x_i, y_i, z_i)$ to its k-th nearest neighbor, where distance is measured by the max-norm in the joint space, $\|w_i - w_j\|_{xyz} = \max\{\|x_i - x_j\|_x, \|y_i - y_j\|_y, \|z_i - z_j\|_z\}$; the norms used in the subspaces can be arbitrary, but the max-norm is often used as well (and is our choice in this paper). From this we obtain:
• $n_{xz}(i)$: the number of points $(x_j, z_j)$, $j \neq i$, with $\|(x_j, z_j) - (x_i, z_i)\|_{xz} = \max\{\|x_j - x_i\|_x, \|z_j - z_i\|_z\} < \varepsilon(i)$
• $n_{yz}(i)$: the number of points $(y_j, z_j)$, $j \neq i$, with $\|(y_j, z_j) - (y_i, z_i)\|_{yz} = \max\{\|y_j - y_i\|_y, \|z_j - z_i\|_z\} < \varepsilon(i)$
• $n_z(i)$: the number of points $z_j$, $j \neq i$, with $\|z_j - z_i\|_z < \varepsilon(i)$
Several recent papers have focused on reducing the finite-sample bias of KSG-type estimators [70]–[72] or on developing other types of estimators [73].…”
Section: Appendix: Estimation of CSE from Data
confidence: 99%
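As a concrete reading of the counting scheme in the quoted passage, here is a minimal sketch in Python, assuming NumPy/SciPy; the function name ksg_counts, the array shapes, and the strict-inequality tolerance are illustrative choices, not the cited authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def ksg_counts(x, y, z, k=4):
    """Neighbor counts used by KSG-style estimators (illustrative sketch).

    x, y, z: arrays of shape (n, d_x), (n, d_y), (n, d_z).
    Returns eps(i) plus the counts n_xz(i), n_yz(i), n_z(i) described above,
    using the max-norm (Chebyshev metric) in the joint space and subspaces.
    """
    w = np.hstack([x, y, z])
    # Distance from w_i to its k-th nearest neighbor in the joint space;
    # query k+1 neighbors because the first returned neighbor is the point itself.
    eps = cKDTree(w).query(w, k=k + 1, p=np.inf)[0][:, -1]

    def count_strictly_within(data, radii):
        # Number of points j != i with ||data_j - data_i|| < radii[i] (max-norm);
        # a tiny tolerance enforces the strict inequality, and the point itself
        # is subtracted from each count.
        tree = cKDTree(data)
        return np.array([len(tree.query_ball_point(data[i], radii[i] * (1 - 1e-12),
                                                   p=np.inf)) - 1
                         for i in range(len(data))])

    n_xz = count_strictly_within(np.hstack([x, z]), eps)
    n_yz = count_strictly_within(np.hstack([y, z]), eps)
    n_z = count_strictly_within(z, eps)
    return eps, n_xz, n_yz, n_z
```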
“…After applying equation (11) again to get $c_{X_s, X_t}\bigl(F_{X_s}(z), F_{X_t}(x)\bigr)$ from the transition of X, equation (22) is easily reorganized into equation (19).…”
Section: Copulae and Space-time Transformations
confidence: 99%
“…[22,25,34]). They have found application in many different fields, ranging from finance and insurance [12,15] to reliability [1,33], stochastic ordering [36], geophysics [42], neuroscience [2,3,20,23,35,41], statistics [19] and many more.…”
Section: Introduction
confidence: 99%
“…Substituting Equation (2) into Equation (1) yields $H(\mathbf{X}) = \sum_{k} H(X_k) + H_c$, where $H(X_k)$ is the entropy of the k’th marginal, to be computed using appropriate 1D estimators, and $H_c$ is the entropy of the copula. Using Sklar’s theorem has been previously suggested as a method for calculating the mutual information between variables, which is identical to the copula entropy [5,28,29,30]. The new approach here is in showing that $H_c$ can be efficiently estimated recursively, similar to the kDP approach.…”
Section: Introduction
confidence: 99%
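Since this last statement describes estimating mutual information as the entropy of the empirical copula, an end-to-end sketch may help. The version below combines the rank (probability-integral) transform with a standard Kozachenko-Leonenko k-NN entropy estimator rather than the recursive kDP-style scheme the quote refers to; the name copula_entropy and all parameter choices are hypothetical.

```python
import numpy as np
from math import lgamma
from scipy.spatial import cKDTree
from scipy.special import digamma

def copula_entropy(samples, k=4):
    """Kozachenko-Leonenko k-NN estimate of the copula entropy (sketch).

    samples: (n, d) array. Each coordinate is mapped to its normalized rank,
    which approximates the probability-integral transform; the k-NN entropy
    estimator is then applied in the resulting rank space. For continuous
    variables, minus this entropy estimates the mutual information among
    the d components.
    """
    n, d = samples.shape
    # Empirical copula: normalized ranks in (0, 1).
    u = (np.argsort(np.argsort(samples, axis=0), axis=0) + 1.0) / (n + 1.0)
    # Euclidean distance from each point to its k-th nearest neighbor
    # (k+1 queried because the nearest "neighbor" is the point itself).
    eps = cKDTree(u).query(u, k=k + 1)[0][:, -1]
    # Kozachenko-Leonenko formula; log_vd is the log-volume of the unit d-ball.
    log_vd = (d / 2.0) * np.log(np.pi) - lgamma(d / 2.0 + 1.0)
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(eps))

# Illustrative check on correlated Gaussians, where the true mutual
# information is -0.5 * log(1 - rho**2).
rng = np.random.default_rng(0)
rho = 0.8
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
print("estimated MI:", -copula_entropy(xy))
print("true MI:     ", -0.5 * np.log(1 - rho**2))
```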