2020 Information Theory and Applications Workshop (ITA)
DOI: 10.1109/ita50056.2020.9244988

Universal Bayes Consistency in Metric Spaces

Cited by 6 publications (4 citation statements). References 37 publications.
“…In addition to restrictions on the hypothesis class, there is extensive research on universal consistency under stochastic process assumptions on observations [29,1,12,11,6,5]. These works on universal learning typically utilize a k-nearest-neighbor-type algorithm for the constructive proof, which doesn't fully align with practical scenarios of human reasoning and scientific discovery.…”
Section: Non-uniform Learning
confidence: 99%
“…The interplay between learning and compression has long been recognized, popularized by Occam's razor rule of thumb (Ariew 1976), and rigorously studied in several frameworks, including in information theory in terms of the minimum description length principle and Kolmogorov complexity (Cover 1999; Li, Vitányi et al. 2008), and more recently in terms of sample compression schemes in the PAC statistical learning framework (Littlestone and Warmuth 1986; Floyd and Warmuth 1995; Graepel, Herbrich, and Shawe-Taylor 2005; Gottlieb, Kontorovich, and Nisnevitch 2014; David, Moran, and Yehudayoff 2016; Hanneke, Kontorovich, and Sadigurschi 2019; Bousquet et al. 2020; Alon et al. 2021).…”
Section: Introduction
confidence: 99%
“…Cérou and Guyader [2006] proved that, if ties are broken uniformly at random, the k_m-Nearest Neighbor classifier is consistent in any Polish metric space in which the Lebesgue-Besicovitch differentiation theorem holds (see also [Forzani et al., 2012, Chaudhuri and Dasgupta, 2014]). Finally, it was recently shown that it is possible to combine compression techniques with 1-Nearest Neighbor classification in order to achieve consistency in essentially separable metric spaces, even if the Lebesgue-Besicovitch differentiation theorem does not hold [Hanneke et al., 2019].…”
Section: Introduction
confidence: 99%
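
To make the rule referenced in the excerpt above concrete, here is a minimal Python sketch of a k-nearest-neighbor classifier with uniform random tie-breaking, in the spirit of Cérou and Guyader [2006]. The Euclidean metric and all names (knn_predict and its parameters) are illustrative assumptions; this is not the compression-based construction of Hanneke et al. [2019].

    import numpy as np

    def knn_predict(X_train, y_train, x, k, rng=None):
        # Majority vote among the k nearest neighbors of x, with distance
        # ties broken uniformly at random. Euclidean metric and all names
        # here are illustrative assumptions, not the paper's construction.
        rng = np.random.default_rng() if rng is None else rng
        dists = np.linalg.norm(X_train - x, axis=1)
        # A random permutation followed by a stable sort on distance yields
        # a uniformly random order among points equidistant from x.
        perm = rng.permutation(len(dists))
        order = perm[np.argsort(dists[perm], kind="stable")]
        votes = y_train[order[:k]]
        labels, counts = np.unique(votes, return_counts=True)
        # Break voting ties uniformly at random as well.
        return rng.choice(labels[counts == counts.max()])

    # Example usage: for consistency, k = k_m grows with the sample size m
    # while k_m / m -> 0 (e.g. k_m = ceil(sqrt(m))).
    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0, 0, 1, 1])
    print(knn_predict(X, y, np.array([1.4]), k=2))

The stable-sort-after-shuffle trick is one simple way to realize the uniform tie-breaking the consistency result requires, since it randomizes order only among points at exactly equal distance.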