2016
DOI: 10.20944/preprints201610.0086.v1
Preprint
Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities

Abstract: Information-theoretic measures such as the entropy, the cross-entropy, and the Kullback-Leibler divergence between two mixture models are core primitives in many signal processing tasks. Since the Kullback-Leibler divergence of mixtures provably does not admit a closed-form formula, it is in practice either estimated using costly Monte-Carlo stochastic integration, approximated, or bounded using various techniques. We present a fast and generic method that builds algorithmically closed-form lower and upper bounds o…
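As a rough illustration of the Monte-Carlo baseline the abstract refers to (not the paper's piecewise log-sum-exp bounding algorithm), the sketch below estimates the Kullback-Leibler divergence between two univariate Gaussian mixtures by stochastic integration; all weights, means, and standard deviations are made-up example values.

```python
# Hedged sketch: Monte-Carlo estimate of KL(m || m') for two univariate
# Gaussian mixtures. This is the costly stochastic-integration baseline the
# abstract mentions, not the paper's piecewise log-sum-exp bounds.
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def mixture_logpdf(x, weights, means, sigmas):
    # log m(x) = log sum_i w_i N(x; mu_i, sigma_i), evaluated stably via log-sum-exp
    comp = np.stack([np.log(w) + norm.logpdf(x, mu, s)
                     for w, mu, s in zip(weights, means, sigmas)])
    return logsumexp(comp, axis=0)

def mc_kl(w1, mu1, s1, w2, mu2, s2, n=100_000):
    # Sample x_1..x_n from the first mixture, then average log m(x) - log m'(x).
    ks = rng.choice(len(w1), size=n, p=w1)
    x = rng.normal(np.asarray(mu1)[ks], np.asarray(s1)[ks])
    return np.mean(mixture_logpdf(x, w1, mu1, s1) - mixture_logpdf(x, w2, mu2, s2))

# Example with two 2-component mixtures (illustrative parameters only).
print(mc_kl([0.5, 0.5], [-1.0, 1.0], [0.5, 0.5],
            [0.3, 0.7], [0.0, 2.0], [1.0, 0.5]))
```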


Cited by 14 publications (6 citation statements)
References 39 publications (12 reference statements)
“…Each code is given equal initial weighting, assuming there are approximately equal amounts of each code, and an initial covariance matrix (default sigma: 10⁻⁵). The GMM returns the assigned code for each bead and the log probability and probability of this assignment (using the LogSumExp algorithm, see scikit-learn documentation and [38]). Confidence interval ellipses for each cluster are calculated from the computed eigenvalues (magnitude in each direction, e.…”
Section: Experimental Methods
confidence: 99%
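A minimal sketch of the workflow this excerpt describes, using scikit-learn's GaussianMixture: equal initial weights per code, clustering of (here synthetic) 2-D bead intensities, per-sample log probabilities (computed internally with log-sum-exp), and confidence-ellipse axes from the covariance eigenvalues. The data, the number of codes, and the mapping of the excerpt's "default sigma: 10⁻⁵" onto reg_covar are assumptions, not details taken from the cited paper.

```python
# Hedged sketch of the GMM-based code assignment described above; parameters
# and data are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

n_codes = 4
X = np.random.default_rng(0).normal(size=(500, 2))         # placeholder 2-D bead intensities

gmm = GaussianMixture(
    n_components=n_codes,
    weights_init=np.full(n_codes, 1.0 / n_codes),           # equal initial weighting per code
    reg_covar=1e-5,                                         # assumed reading of "default sigma: 10^-5"
    random_state=0,
).fit(X)

codes = gmm.predict(X)            # assigned code for each bead
log_prob = gmm.score_samples(X)   # log-likelihood of each bead (log-sum-exp over components)
resp = gmm.predict_proba(X)       # probability of each possible assignment

# Confidence-interval ellipse semi-axes for each cluster from the covariance eigenvalues.
for k, cov in enumerate(gmm.covariances_):
    eigvals, eigvecs = np.linalg.eigh(cov)
    semi_axes = np.sqrt(5.991 * eigvals)   # 5.991 = chi-square(2 dof) quantile for a 95% ellipse
```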
“…Consider a set of n w-mixtures [56]. Because F is the negative differential entropy of a mixture (not available in closed form [106]), we approximate the intractable F by another close tractable generator. We use Monte Carlo stochastic sampling to obtain a Monte-Carlo convex approximation for an independent and identically distributed sample.…”
Section: Some Applications of Information Geometry
confidence: 99%
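The Monte-Carlo step in this excerpt can be illustrated as follows: since the negative differential entropy F of a mixture has no closed form, it is approximated by averaging log m(x) over an i.i.d. sample drawn from the mixture. The univariate Gaussian mixture and all parameters below are assumptions for illustration only.

```python
# Hedged sketch: Monte-Carlo approximation of F = -h(m), the negative
# differential entropy of a univariate Gaussian mixture m (no closed form).
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(1)

def neg_entropy_mc(weights, means, sigmas, n=50_000):
    # Draw an i.i.d. sample x_1..x_n from the mixture.
    ks = rng.choice(len(weights), size=n, p=weights)
    x = rng.normal(np.asarray(means)[ks], np.asarray(sigmas)[ks])
    # log m(x) via log-sum-exp over the components.
    comp = np.stack([np.log(w) + norm.logpdf(x, mu, s)
                     for w, mu, s in zip(weights, means, sigmas)])
    log_m = logsumexp(comp, axis=0)
    # -h(m) ≈ (1/n) sum_i log m(x_i)
    return np.mean(log_m)

print(neg_entropy_mc([0.4, 0.6], [-2.0, 1.0], [1.0, 0.7]))
```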
“…The Rényi α-weighted mean can be rewritten in terms of the log-sum-exp (convex) function [41, 42].…”
Section: Rényi Entropy and Divergence and Sibson Information Radius
confidence: 99%
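To make the log-sum-exp connection concrete, the sketch below checks the standard identity H_α(p) = lse(α log p) / (1 − α) for the Rényi entropy of a discrete distribution; this is an illustration of the kind of rewriting referenced above, not a reproduction of the cited paper's formula.

```python
# Hedged illustration: Rényi entropy of a discrete distribution written with
# the log-sum-exp (lse) function, H_alpha(p) = lse(alpha * log p) / (1 - alpha).
import numpy as np
from scipy.special import logsumexp

def renyi_entropy_lse(p, alpha):
    p = np.asarray(p, dtype=float)
    return logsumexp(alpha * np.log(p)) / (1.0 - alpha)

p = np.array([0.1, 0.2, 0.3, 0.4])
print(renyi_entropy_lse(p, alpha=2.0))            # via log-sum-exp
print(np.log(np.sum(p ** 2.0)) / (1.0 - 2.0))     # direct definition; values should match
```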