2022
DOI: 10.48550/arxiv.2203.09659
Preprint

Low-degree learning and the metric entropy of polynomials

Abstract: Let F_{n,d} be the class of all functions f : {−1, 1}^n → [−1, 1] on the n-dimensional discrete hypercube of degree at most d. In the first part of this paper, we prove that any (deterministic or randomized) algorithm which learns F_{n,d} with L^2-accuracy ε requires at least Ω((1 − √ε) 2^d log n) queries for large enough n, thus establishing the sharpness as n → ∞ of a recent upper bound of Eskenazis and Ivanisvili (2021). To do this, we show that the logarithm of the L^2-packing numbers of F_{n,d} at scale ε is at most 2^{Cd} log n / ε^4 for large enough n, where…
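The objects in the abstract can be made concrete with a minimal Python sketch, not taken from the paper: it builds random members of the concept class F_{n,d} and measures their L^2 distance by exact enumeration over the cube, then evaluates the order of the stated query lower bound. The sizes n, d, ε and the helper names (low_degree_function, l2_distance) are illustrative assumptions only.

```python
# Sketch (hypothetical, not the paper's code): members of F_{n,d} and the
# L^2 metric in which the packing numbers above are measured.
import itertools
import numpy as np

def low_degree_function(n, d, rng):
    """Return a random multilinear polynomial of degree <= d on {-1,1}^n,
    rescaled so its values lie in [-1, 1] (i.e., a member of F_{n,d})."""
    monomials = [S for k in range(d + 1)
                 for S in itertools.combinations(range(n), k)]
    coeffs = rng.standard_normal(len(monomials))
    def f(x):
        return sum(c * np.prod([x[i] for i in S])
                   for c, S in zip(coeffs, monomials))
    # Rescale by the maximum absolute value over the (small) cube.
    cube = list(itertools.product([-1, 1], repeat=n))
    m = max(abs(f(x)) for x in cube)
    return lambda x: f(x) / m

def l2_distance(f, g, n):
    """L^2 distance w.r.t. the uniform measure on {-1,1}^n (exact for small n)."""
    cube = list(itertools.product([-1, 1], repeat=n))
    return np.sqrt(np.mean([(f(x) - g(x)) ** 2 for x in cube]))

rng = np.random.default_rng(0)
n, d, eps = 6, 2, 0.1  # illustrative sizes only; the theorems concern large n
f, g = low_degree_function(n, d, rng), low_degree_function(n, d, rng)
print(l2_distance(f, g, n))
print((1 - np.sqrt(eps)) * 2 ** d * np.log(n))  # order of the query lower bound
```

Exact enumeration of the 2^n cube points is only feasible for small n; it is used here purely to make the definitions of F_{n,d} and the L^2 metric tangible.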

Cited by 0 publications
References 28 publications