2020 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit44484.2020.9174474
On Learning Parametric Non-Smooth Continuous Distributions

Cited by 3 publications (5 citation statements) | References 13 publications
“…In this paper, we tackle the problem of empirical distribution mismatch, which is a key issue in semi-supervised learning (e.g., [21,5,50]). In principle, the labeled and the unlabeled training samples are assumed to be drawn from an identical distribution.…”
Section: Class-specific Distribution Alignment
confidence: 99%
“…Here (12) follows from the fact that, conditioned on N_{2i−1} + N_{2i}, the log(N_{2i−1} + 1) + log(N_{2i} + 1) terms are independent. (13) follows from the fact that all the N_{2i−1} + N_{2i} have the same distribution, together with the linearity of expectation.…”
Section: Lower Bound
confidence: 99%
“…Formalizing this heuristic might be an interesting future research direction. By [12], as n → ∞ for the uniform distribution,…”
Section: Acknowledgments
confidence: 99%
“…Sequential probability assignment is a classic topic in information theory with extensive literature, see the survey by Merhav and Feder [1998] and the references within. In particular, the idea of probability assignments that are Bayesian mixtures over the reference class of distributions [Krichevsky and Trofimov, 1981] is of central importance-such mixture probability assignments arise as the optimal solution to several operational information theoretic and statistical problems [Kamath et al, 2015]. It is also known that the Bayesian mixture approach often outperforms the "plug-in" approach of estimating a predictor from the reference class and then playing it [Merhav and Feder, 1998].…”
Section: A Related Work
confidence: 99%
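The Bayesian-mixture probability assignment mentioned in the excerpt above (Krichevsky and Trofimov, 1981) can be illustrated with a minimal sketch for the binary case. For Bernoulli sources under a Beta(1/2, 1/2) prior, the mixture reduces to the well-known "add-1/2" rule; the function names below are illustrative, not from any cited paper:

```python
# Sequential probability assignment via the Krichevsky-Trofimov (KT)
# estimator: the Bayesian mixture over all Bernoulli sources with a
# Beta(1/2, 1/2) prior collapses to the "add-1/2" counting rule.

def kt_prob_of_one(n_ones: int, n_zeros: int) -> float:
    """Probability assigned to the next symbol being 1,
    given counts of symbols seen so far."""
    return (n_ones + 0.5) / (n_ones + n_zeros + 1.0)

def kt_sequence_prob(bits) -> float:
    """Probability the KT mixture assigns to an entire binary
    sequence, built up one sequential prediction at a time."""
    p, n1, n0 = 1.0, 0, 0
    for b in bits:
        q = kt_prob_of_one(n1, n0)
        p *= q if b == 1 else (1.0 - q)
        n1 += b
        n0 += 1 - b
    return p
```

This mixture assignment, rather than a "plug-in" estimate of the source followed by prediction, is what attains the minimax redundancy rates discussed in the surveyed literature; e.g. `kt_sequence_prob([1, 1])` evaluates to 0.5 × 0.75 = 0.375.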