2011
DOI: 10.1111/j.1467-842x.2011.00633.x

Estimation of Multivariate Shannon Entropy Using Moments

Abstract: Three new entropy estimators for multivariate distributions are introduced. Two cases are considered: distributions supported on the unit sphere and on the unit cube. In the former case, the consistency of the proposed entropy estimator and an upper bound on its absolute error are established. In the latter case, under the assumption that only the moments of the underlying distribution are available, a non-traditional estimator of the entropy is suggested. We also study the practical performance…
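For context, the target quantity is the differential Shannon entropy of a multivariate density, and a moment-based estimator is naturally a plug-in construction. A minimal sketch in our own notation (the paper's exact construction may differ):

```latex
% Differential Shannon entropy of a density f on a support S,
% here S being the unit sphere or the unit cube in R^d:
\[
  H(f) \;=\; -\int_S f(x)\,\log f(x)\,dx .
\]
% Moment-based plug-in idea (a sketch, not the paper's exact estimator):
% from the empirical moments
%   \hat{\mu}_\alpha \;=\; \frac{1}{n}\sum_{i=1}^{n} X_i^\alpha ,
% recover a density approximant \hat{f}_n and set
\[
  \hat{H}_n \;=\; -\int_S \hat{f}_n(x)\,\log \hat{f}_n(x)\,dx .
\]
```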

Cited by 2 publications (4 citation statements) · References 17 publications
“…Another entropy estimator for hyperspherical data was developed recently by Mnatsakanov et al. [10] using the MR approach. We call this estimator the MR entropy estimator and denote it by H_n^(MR)(f).…”
Section: Comparison with the Moment-Recovered Construction
confidence: 99%
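The notation H_n^(MR)(f) above denotes the moment-recovered (MR) entropy estimator of Mnatsakanov et al. [10]. As a rough illustration only, here is a Python sketch of a generic MR-type plug-in entropy estimate on the unit interval, using Beta/binomial smoothing of empirical moments; the function names and the choice of support are ours, and the cited paper's hyperspherical construction differs.

```python
import numpy as np
from math import comb

def mr_density(x, moments, alpha):
    """Moment-recovered density approximant on [0, 1].

    A standard MR-type construction (Beta/binomial smoothing of the
    moment sequence; the hyperspherical estimator in [10] differs):
        f_alpha(x) = (alpha + 1) * C(alpha, k)
                     * sum_{m=0}^{alpha-k} C(alpha-k, m) (-1)^m mu_{k+m},
    with k = floor(alpha * x) and moments[j] = mu_j = E[X^j].
    """
    k = min(int(alpha * x), alpha - 1)
    s = sum((-1) ** m * comb(alpha - k, m) * moments[k + m]
            for m in range(alpha - k + 1))
    return (alpha + 1) * comb(alpha, k) * s

def mr_entropy(sample, alpha=24, grid=400):
    """Plug-in entropy estimate on [0, 1]: recover the density from
    empirical moments, then integrate -f log f on a midpoint grid."""
    moments = [float(np.mean(sample ** j)) for j in range(alpha + 1)]
    xs = (np.arange(grid) + 0.5) / grid
    # floor at a tiny positive value: the approximant can dip negative
    f = np.array([max(mr_density(x, moments, alpha), 1e-12) for x in xs])
    return -float(np.mean(f * np.log(f)))  # Riemann sum, spacing 1/grid

# Example: Beta(2, 3) sample; true differential entropy ~ -0.235 nats
rng = np.random.default_rng(0)
print(mr_entropy(rng.beta(2.0, 3.0, size=20_000)))
```

The smoothing parameter alpha trades bias against the numerical cancellation in the alternating sum, which limits how large alpha can be taken in double precision.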
“…In Section 4, we present simulation studies using uniform hyperspherical distributions and the aforementioned vMF probability models. In addition, the kNN entropy estimator is compared with the MR approach proposed in Mnatsakanov et al. [10]. We conclude this study in Section 5.…”
Section: Introduction
confidence: 96%
“…These approaches directly estimate the MI by using the k-nearest-neighbor (KNN) method [98][99][100][101][102][103][104][105]. They compared those two approaches with the KNN entropy estimator [97,99,[106][107][108][109][110][111][112][113], which estimates the MI indirectly from the entropies, using a yeast expression dataset with 6000 genes and 300 samples. The bias caused by the separate estimation of H(X), H(Y), and H(X,Y) is decreased in the MI^(1) and MI^(2) methods.…”
Section: Analysis of BS, KDE, and BUB Estimators
confidence: 99%
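The indirect route criticized in this passage computes I(X;Y) = H(X) + H(Y) - H(X,Y) from three separate entropy estimates. A minimal, self-contained sketch using a Kozachenko-Leonenko-type kNN entropy estimator (our own implementation, not the cited papers' code):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko-type kNN entropy estimate in nats.

    x : (n, d) array.  Uses
        H_hat = psi(n) - psi(k) + log V_d + (d / n) * sum_i log r_i,
    where r_i is the Euclidean distance from x_i to its k-th nearest
    neighbour and V_d = pi^(d/2) / Gamma(d/2 + 1) is the unit-ball
    volume in R^d.
    """
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    # query k+1 neighbours: each point is its own zero-distance neighbour
    r = cKDTree(x).query(x, k=k + 1)[0][:, -1]
    log_vd = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1)
    return float(digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r)))

def knn_mutual_info(x, y, k=3):
    """Indirect MI from three entropy estimates,
    I(X;Y) = H(X) + H(Y) - H(X,Y); as the quoted passage notes, the
    three estimation errors do not cancel, which biases this route."""
    return (knn_entropy(x, k) + knn_entropy(y, k)
            - knn_entropy(np.hstack([x, y]), k))

# Example: correlated Gaussian pair; true MI = -0.5 * log(1 - rho^2)
rng = np.random.default_rng(1)
rho = 0.8
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
print(knn_mutual_info(z[:, :1], z[:, 1:]))  # true value ~ 0.511 nats
```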