2014 IEEE International Symposium on Information Theory
DOI: 10.1109/isit.2014.6875157

Efficient compression of monotone and m-modal distributions

Abstract: We consider universal compression of n samples drawn independently according to a monotone or m-modal distribution over k elements. We show that for all these distributions, the per-sample redundancy diminishes to 0 if k = exp(o(n/log n)) and is at least a constant if k = exp(Ω(n)).
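
For context, one standard way to formalize the per-sample redundancy referred to in the abstract is the minimax (expected) redundancy of the class, normalized by the number of samples. The display below is a sketch under that standard definition; the paper's exact formulation (e.g. worst-case versus expected redundancy) may differ in details.

```latex
% Minimax expected redundancy of a class \mathcal{P} of i.i.d. sources over n samples
% (with p(x^n) = \prod_{i=1}^n p(x_i)), and the per-sample redundancy the abstract describes.
\[
  \bar{R}(\mathcal{P}^n)
  \;=\; \min_{q}\;\max_{p \in \mathcal{P}}\;
        \mathbb{E}_{X^n \sim p}\!\left[\log \frac{p(X^n)}{q(X^n)}\right],
  \qquad
  \text{per-sample redundancy} \;=\; \frac{\bar{R}(\mathcal{P}^n)}{n}.
\]
% Abstract's claim: for monotone (or m-modal) distributions over [k], the per-sample
% redundancy tends to 0 when k = \exp(o(n/\log n)) and is \Omega(1) when k = \exp(\Omega(n)).
```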

Cited by 5 publications (4 citation statements)
References 26 publications (20 reference statements)

“…In [Bir87], Birgé showed that any monotone distribution p over [n] can be obliviously decomposed into O(log(n)/ε) intervals, such that the flattening p̄ (recall Definition 6) of p over these intervals is ε-close to p in total variation distance. [AJOS14] extend this result, giving a bound on the χ²-distance between p̄ and p. We strengthen these results by extending them to monotone distributions over [n]^d. In particular, we partition the domain [n]^d of p into O((d log(n)/ε²)^d) rectangles, and compare it with p̄, the flattening over these rectangles.…”
Section: Testing Monotonicity (supporting)
confidence: 64%
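
A minimal sketch of the oblivious (Birgé-style) decomposition and flattening discussed in this quote, for the 1-D case: the interval lengths grow geometrically like (1+ε)^j and do not depend on p, and the flattening p̄ redistributes each interval's mass uniformly within it. Function names and the toy monotone distribution below are illustrative, not taken from [Bir87] or [AJOS14].

```python
import numpy as np

def birge_intervals(n, eps):
    """Oblivious partition of {0, ..., n-1} into O(log(n)/eps) half-open
    intervals whose lengths grow roughly like (1 + eps)^j, independent of p."""
    boundaries = [0]
    length = 1.0
    while boundaries[-1] < n:
        boundaries.append(min(n, boundaries[-1] + max(1, int(length))))
        length *= 1.0 + eps
    return list(zip(boundaries[:-1], boundaries[1:]))

def flatten(p, intervals):
    """Flattening p_bar of p: constant on each interval, preserving its total mass."""
    p = np.asarray(p, dtype=float)
    p_bar = np.empty_like(p)
    for lo, hi in intervals:
        p_bar[lo:hi] = p[lo:hi].sum() / (hi - lo)
    return p_bar

if __name__ == "__main__":
    n, eps = 1000, 0.1
    p = 1.0 / np.arange(1, n + 1)          # a monotone (non-increasing) shape
    p /= p.sum()
    intervals = birge_intervals(n, eps)
    p_bar = flatten(p, intervals)
    tv = 0.5 * np.abs(p - p_bar).sum()     # total variation distance to the flattening
    print(f"{len(intervals)} intervals, TV(p, p_bar) = {tv:.4f}")
```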
“…Birgé [Bir87] showed that any monotone distribution can be estimated to total variation distance ε by an O(log(n)/ε)-piecewise constant distribution. Moreover, the intervals over which the output is constant are independent of the distribution p. This result was strengthened to the Kullback-Leibler divergence by [AJOS14] to study the compression of monotone distributions. They upper bound the KL divergence by the χ² distance and then bound the χ² distance.…”
Section: C1 a Structural Results For Monotone Distributions On The Hy... (mentioning)
confidence: 90%
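
For reference, the standard chain of inequalities that lets one pass from a χ²-distance bound to a KL bound (the route this quote attributes to [AJOS14]) is, for distributions p and q on a common support with q > 0:

```latex
% KL divergence is dominated by the chi-squared divergence:
% Jensen's inequality on the concave log, then log(1 + x) <= x.
\[
  D(p \,\|\, q)
  \;=\; \sum_i p_i \log \frac{p_i}{q_i}
  \;\le\; \log\!\Big(\sum_i \frac{p_i^2}{q_i}\Big)
  \;=\; \log\big(1 + \chi^2(p \,\|\, q)\big)
  \;\le\; \chi^2(p \,\|\, q),
  \qquad
  \chi^2(p \,\|\, q) \;=\; \sum_i \frac{(p_i - q_i)^2}{q_i}.
\]
```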
“…The second approach restricted the class of distributions compressed. A series of works studied the class of monotone distributions (Shamir [2013], Acharya et al. [2014a]). Recently, Acharya et al. [2014a] showed that the class M_k of monotone distributions over {1, .…”
Section: Previous Results (mentioning)
confidence: 99%
“…[19] studied the class M_k^n of length-n sequences from monotone distributions over [k], and tightly characterized the redundancy for any k = O(n). Recently, [3] studied M_k^n for a much larger range of k and in particular showed that the class is universally compressible for all k = exp(o(n/log n)) and is not universally compressible for any k = exp(Ω(n)).…”
Section: Introduction (mentioning)
confidence: 99%