Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing 2021
DOI: 10.1145/3406325.3451066
Near-optimal learning of tree-structured distributions by Chow-Liu

Cited by 10 publications (5 citation statements) · References 31 publications
“…We provided concentration bounds for the KL divergence between the underlying distribution and the Laplace estimator. Our results show that the dependence on the error probability can be bounded as $\tilde{O}(\sqrt{k}\,\log^{5/2}(1/\delta)/n)$, which improves on the previous bound of $\tilde{O}(k\log(1/\delta)/n)$ recently obtained by [4]. We further established a lower bound of $\Omega(\sqrt{k}/n)$ on the variance and the tail bound of the KL loss of the Laplace estimator, thus showing our results are nearly optimal.…”
Section: Discussion (supporting, confidence: 85%)
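The objects in this citation are easy to make concrete. Below is a minimal Python sketch, not taken from either paper, of the Laplace (add-one) estimator over a $k$-symbol alphabet and its KL loss against the true distribution; the Dirichlet-drawn distribution, alphabet size, and sample size are illustrative assumptions.

```python
import numpy as np

def laplace_estimator(counts: np.ndarray) -> np.ndarray:
    """Add-one (Laplace) smoothing: p_hat_i = (N_i + 1) / (n + k)."""
    n, k = counts.sum(), counts.size
    return (counts + 1.0) / (n + k)

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """KL(p || q) in nats; assumes q > 0 everywhere (true after smoothing)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Illustrative setup: k = 100 symbols, n = 10_000 i.i.d. samples.
rng = np.random.default_rng(0)
k, n = 100, 10_000
p = rng.dirichlet(np.ones(k))               # arbitrary "true" distribution
samples = rng.choice(k, size=n, p=p)
counts = np.bincount(samples, minlength=k)
p_hat = laplace_estimator(counts)
print(f"KL(p || p_hat) = {kl_divergence(p, p_hat):.6f}")
```

Running many independent trials and inspecting the upper tail of $\mathrm{KL}(p \,\|\, \widehat{p})$ is exactly the quantity the quoted $\tilde{O}(\sqrt{k}\,\log^{5/2}(1/\delta)/n)$ concentration bound controls.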
“…We refer the reader to Theorem 2 for the detailed statement, including leading constants and a bound holding for all regimes of $n$. We emphasize that, in contrast to the previous bound given in (5), the additional $t_\delta$ term is here sublinear in $k$, and in particular negligible compared to the expectation term $\mathbb{E}[\mathrm{KL}(p \,\|\, \widehat{p}_1)]$ for most values of $\delta$. Viewed differently, our result improves on that of [4] for all $\delta \ge \exp(-\tilde{O}(k^{1/3}))$.…”
Section: Introduction (mentioning, confidence: 44%)
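The quoted crossover regime can be sanity-checked by equating the two tail bounds; this back-of-the-envelope step is ours, not a statement from the paper.

```latex
% Ignoring constants and polylogarithmic factors, the new tail beats the old one when
\sqrt{k}\,\log^{5/2}(1/\delta) \;\le\; k\,\log(1/\delta)
\;\iff\; \log^{3/2}(1/\delta) \;\le\; \sqrt{k}
\;\iff\; \log(1/\delta) \;\le\; k^{1/3}
\;\iff\; \delta \;\ge\; e^{-k^{1/3}},
```

which matches, up to the hidden polylogarithmic factors, the stated threshold $\delta \ge \exp(-\tilde{O}(k^{1/3}))$.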
“…Clearly, Bayes nets satisfy both conditions (where "efficient" means, as usual, polynomial in the number of parameters). Bhattacharyya, Gayen, Meel and Vinodchandran [BGMV20] extended this idea to develop polynomial-time algorithms for additively approximating the TV distance between two bounded in-degree Bayes nets using a polynomial number of samples from each.…”
Section: Distance Computation (mentioning, confidence: 99%)
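The approximation result cited above exploits the fact that a Bayes net's pmf factorizes and can be evaluated efficiently at any point. The following is a generic Monte Carlo sketch in that spirit, not the actual algorithm of [BGMV20]: it assumes sample access to $P$ and exact pmf evaluation for both $P$ and $Q$, and uses the identity $\mathrm{TV}(P, Q) = \mathbb{E}_{x \sim P}[\max(0,\, 1 - Q(x)/P(x))]$.

```python
import numpy as np

def tv_monte_carlo(sample_p, pmf_p, pmf_q, m: int = 100_000) -> float:
    """Additive Monte Carlo estimate of TV(P, Q).

    Uses TV(P, Q) = E_{x~P}[max(0, 1 - Q(x)/P(x))], which needs samples
    from P plus pmf evaluation for both P and Q (both cheap for a Bayes
    net, whose pmf is a product of local conditional probabilities).
    """
    xs = sample_p(m)
    ratios = pmf_q(xs) / pmf_p(xs)
    return float(np.mean(np.clip(1.0 - ratios, 0.0, None)))

# Illustrative check on two explicit distributions over {0, ..., 4}.
p = np.array([0.4, 0.3, 0.1, 0.1, 0.1])
q = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
rng = np.random.default_rng(1)
est = tv_monte_carlo(lambda m: rng.choice(5, size=m, p=p),
                     lambda xs: p[xs], lambda xs: q[xs])
print(f"estimate ≈ {est:.3f}, exact = {0.5 * np.abs(p - q).sum():.3f}")
```

Since the integrand lies in $[0, 1]$, Hoeffding's inequality gives additive error $\varepsilon$ with $m = O(\log(1/\delta)/\varepsilon^2)$ samples; handling bounded in-degree Bayes nets given only samples is the contribution of [BGMV20] and is beyond this sketch.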