2010
DOI: 10.1007/s10994-010-5179-6
Polynomial regression under arbitrary product distributions

Abstract: In recent work, Kalai, Klivans, Mansour, and Servedio (2005) studied a variant of the "Low-Degree (Fourier) Algorithm" for learning under the uniform probability distribution on {0,1}^n. They showed that the L_1 polynomial regression algorithm yields agnostic (tolerant to arbitrary noise) learning algorithms with respect to the class of threshold functions, under certain restricted instance distributions, including uniform on {0,1}^n and Gaussian on R^n. In this work we show how all learning results based o…
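The L_1 polynomial regression step named in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it fits a low-degree multilinear polynomial p minimizing the sum of absolute errors sum_i |p(x_i) - y_i| (cast as a linear program in the standard way) and outputs the thresholded hypothesis x -> sign(p(x)).

```python
import itertools

import numpy as np
from scipy.optimize import linprog

def l1_poly_regression(X, y, degree=1):
    """Sketch of L1 polynomial regression: fit a degree-`degree`
    multilinear polynomial p minimizing sum_i |p(x_i) - y_i| via an LP,
    then return the hypothesis x -> sign(p(x))."""
    n = X.shape[1]
    # Multilinear monomial basis: one product per coordinate subset of size <= degree.
    subsets = [s for d in range(degree + 1)
               for s in itertools.combinations(range(n), d)]
    Phi = np.array([[np.prod(x[list(s)]) if s else 1.0 for s in subsets]
                    for x in X])
    m, k = Phi.shape
    # LP variables: k free coefficients c, then m slack variables e >= 0.
    # Minimize sum(e) subject to -e <= Phi @ c - y <= e.
    obj = np.concatenate([np.zeros(k), np.ones(m)])
    A_ub = np.block([[Phi, -np.eye(m)], [-Phi, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * k + [(0, None)] * m
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    coeffs = res.x[:k]

    def hypothesis(x):
        p = sum(c * (np.prod(x[list(s)]) if s else 1.0)
                for c, s in zip(coeffs, subsets))
        return 1.0 if p >= 0 else -1.0

    return hypothesis
```

At the LP optimum the slack e_i equals |p(x_i) - y_i|, so minimizing sum(e) is exactly the L_1 regression objective; the final sign threshold turns the real-valued regressor into a Boolean hypothesis.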

Cited by 31 publications (39 citation statements)
References 22 publications
“…The Efron-Stein decomposition method has found numerous applications in statistics [8,13,19], hardness of approximation [1,14,15], learning theory [4], and social choice theory [15]. As we see below, the method is also particularly well suited for the analysis of juntas.…”
Section: Our Techniques (mentioning)
confidence: 99%
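For reference, the Efron-Stein decomposition mentioned in this quote is the standard orthogonal decomposition of a function of independent coordinates (this is the textbook statement, not taken from the cited paper):

```latex
f \;=\; \sum_{S \subseteq [n]} f^{=S},
\qquad
f^{=S}(x) \;=\; \sum_{T \subseteq S} (-1)^{|S \setminus T|}\,
\mathbb{E}\!\left[\, f(X) \;\middle|\; X_T = x_T \,\right],
```

where each piece f^{=S} depends only on the coordinates in S and is orthogonal to every f^{=S'} with S' ≠ S. For the uniform distribution on {0,1}^n this recovers the Fourier decomposition, which is what lets the Low-Degree Algorithm generalize to arbitrary product distributions.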
“…[22,26,3,15,23,39]. Let D_X be a (fixed, known) distribution over an example space X, such as the uniform distribution over {−1, 1}^n or the standard multivariate Gaussian distribution N(0, I_n) over R^n.…”
Section: Application: Agnostically Learning Constant-Degree PTFs in Po… (mentioning)
confidence: 99%
“…The "polynomial kernel" is a popular kernel to use in this way; when, as is usually the case, the degree parameter in the polynomial kernel is set to be a small constant, these algorithms output hypotheses that are equivalent to low-degree PTFs. Low-degree PTFs are also used as hypotheses in several important learning algorithms with a more complexity-theoretic flavor, such as the low-degree algorithm of Linial et al. [21] and its variants [12,22], including some algorithms for distribution-specific agnostic learning [14,20,3,6].…”
(mentioning)
confidence: 99%
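The equivalence this quote mentions is easy to see concretely: expanding (x·x_i + c)^d turns a polynomial-kernel decision rule into a degree-d polynomial in x, so its sign is a degree-d PTF. A minimal sketch (the support points, weights, and XOR target below are made-up illustration, not from the cited works):

```python
import numpy as np

def poly_kernel_hypothesis(support_X, alphas, bias=0.0, degree=2, c=0.0):
    """Decision rule h(x) = sign( sum_i alpha_i * (x . x_i + c)^degree + bias ).
    Expanding the kernel shows h is a polynomial threshold function (PTF)
    of degree `degree` in x."""
    def h(x):
        val = sum(a * (np.dot(x, xi) + c) ** degree
                  for a, xi in zip(alphas, support_X)) + bias
        return 1.0 if val >= 0 else -1.0
    return h

# Degree-2 example: the XOR target x[0] * x[1] on {-1, 1}^2 is realized
# with four support points and weights alpha_i = y_i / 4.
pts = [np.array([1.0, 1.0]), np.array([-1.0, -1.0]),
       np.array([1.0, -1.0]), np.array([-1.0, 1.0])]
alphas = [0.25, 0.25, -0.25, -0.25]
h = poly_kernel_hypothesis(pts, alphas, degree=2)
```

Here (x·x_i)^2 expands to a quadratic in x, and the weighted sum collapses to 2·x[0]·x[1], so the sign computes XOR — a degree-2 PTF, even though no quadratic features were ever materialized.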