2020
DOI: 10.1016/j.acha.2018.09.009

Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces

Abstract: We investigate regularized algorithms combined with projection for the least-squares regression problem over a Hilbert space, covering nonparametric regression over a reproducing kernel Hilbert space. We prove convergence results with respect to variants of norms, under a capacity assumption on the hypothesis space and a regularity condition on the target function. As a result, we obtain optimal rates for regularized algorithms with randomized sketches, provided that the sketch dimension is proportional to the ef…
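To make the setting concrete, below is a minimal, hypothetical sketch of a projected regularized least-squares estimator in the spirit of the algorithms the abstract describes, using a uniform Nyström-type column sub-sampling as the projection. The kernel choice and all names and parameters (`sketched_krr`, `m`, `lam`, `sigma`) are illustrative assumptions, not the authors' implementation; the sketch dimension `m` plays the role of the projection dimension that the paper relates to the effective dimension.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def sketched_krr(X, y, lam, m, sigma=1.0, seed=0):
    """Kernel ridge regression restricted to a uniform Nystrom-type projection.

    Solves  min_a ||K_nm a - y||^2 + lam * n * a^T K_mm a  over the span of
    m sub-sampled kernel columns; m is the (hypothetical) sketch dimension.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)      # uniform column sub-sampling
    K_nm = gaussian_kernel(X, X[idx], sigma)
    K_mm = gaussian_kernel(X[idx], X[idx], sigma)
    A = K_nm.T @ K_nm + lam * n * K_mm
    alpha = np.linalg.solve(A + 1e-10 * np.eye(m), K_nm.T @ y)
    return lambda X_new: gaussian_kernel(X_new, X[idx], sigma) @ alpha

# Toy usage: fit a noisy sine on [-1, 1] with a sketch dimension m << n.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(500, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(500)
f_hat = sketched_krr(X, y, lam=1e-3, m=60, sigma=0.3)
print("train MSE:", np.mean((f_hat(X) - y) ** 2))
```

Uniform sub-sampling is only one admissible choice of projection; the paper's guarantees concern how such a sketch dimension must scale with the effective dimension, not this particular sampling scheme.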


Citations: cited by 39 publications (52 citation statements)
References: 33 publications
“…Eigenfunction assumptions were removed in [12] by using the traditional integral operator approach, and the analysis was extended to semi-supervised learning [20] and multi-pass SGD [15]. Rudi and Rosasco derived optimal statistical error bounds for random features [13] by applying the standard integral operator framework [10,11] in the feature space, and the result was further studied in [21] and [22]. Table 1 reports the statistical and computational properties of related approaches and our main results.…”
Section: Methods
Mentioning confidence: 99%
“…They are optimal under the additional assumption $s - \beta \ge d/2$. Corollary 4.4 in Lin et al. (2020) implies a risk upper bound of order $n^{-\frac{2\zeta}{(2\zeta+\gamma)\vee 1}}$, where the parameter $\zeta$ is the power of the so-called source condition (see Engl et al. (2000) for more details on the source condition, and Blanchard et al. (2007) for the statistical perspective) and $\gamma$ is the decay rate of the effective dimension. In the case of the Sobolev RKHS $W^{s}(\mathcal{X})$ and a ball in $W^{\beta}_{p}(\mathcal{X})$ we have…”
Section: General Comparison To the Setting Of Statistical Nonparametr...
Mentioning confidence: 99%
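For reference, the rate quoted above can be written out under one standard parameterization. The operator $L$, the source-condition form, and the effective-dimension bound below are assumptions of this sketch, stated only to fix notation; they are not taken verbatim from the cited works.

```latex
% Hedged sketch of the quoted rate under standard assumptions (notation assumed):
%   source condition:  f_\rho = L^{\zeta} g  with  \|g\| \le R,  \zeta > 0
%   capacity:          \mathcal{N}(\lambda) = \mathrm{Tr}\big((L+\lambda I)^{-1}L\big) \lesssim \lambda^{-\gamma}
\[
  \mathbb{E}\,\lVert \hat f_{n} - f_{\rho} \rVert_{L^{2}}^{2}
  \;\lesssim\; n^{-\frac{2\zeta}{(2\zeta+\gamma)\vee 1}} ,
\]
% so the exponent is 2\zeta/(2\zeta+\gamma) when 2\zeta+\gamma \ge 1, and 2\zeta otherwise.
```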

Online nonparametric regression with Sobolev kernels
Zadorozhnyi, Gaillard, Gerschinovitz et al., 2021 (preprint; self-citation)
“…Moreover, our analysis does not make any assumptions on the dimensionality of the input/output RKHS or the compactness of the latent spaces. These relaxations do introduce a slight tradeoff: they require polynomial eigendecay of the covariance operator, a standard assumption in regularized least-squares problems [1,12,13]. In a sense, our approach hybridizes the two aforementioned frameworks; namely, like [17] we construct conditional embeddings as operators, and characterize the convergence of the sample estimator via the spectral structure of the target embedding operator.…”
Section: Introduction
Mentioning confidence: 99%
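As a point of reference for the eigendecay assumption mentioned in the statement above, one common formulation (an assumption of this note, not a quote from the cited works) links polynomial decay of the covariance operator's eigenvalues to the capacity condition used in the quoted rates:

```latex
% Hedged sketch: polynomial eigendecay and its usual capacity consequence.
%   eigendecay assumption:  \mu_j(L) \lesssim j^{-1/\gamma}  for some  \gamma \in (0, 1],
% which yields an effective-dimension bound of matching order:
\[
  \mathcal{N}(\lambda) \;=\; \operatorname{Tr}\!\big( (L + \lambda I)^{-1} L \big)
  \;=\; \sum_{j \ge 1} \frac{\mu_j}{\mu_j + \lambda}
  \;\lesssim\; \lambda^{-\gamma}, \qquad \lambda \in (0, 1].
\]
```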