2019
DOI: 10.1080/02331888.2019.1694931

Faster convergence rate for functional linear regression in reproducing kernel Hilbert spaces

Cited by 7 publications (7 citation statements) · References 18 publications

“…Since it aggressively targets the reduction of residual errors, it is commonly observed in practical applications that the CG method achieves convergence in significantly fewer iterations compared to other gradient descent techniques, as discussed in the context of kernel learning by [13] and [8]. We obtain a convergence rate for ‖β̂ − β*‖_{L²(S)} and show it to align with the minimax rates of the FLR model [6,27], thereby establishing the minimax optimality of our estimator.…”
Section: Introduction (supporting)
confidence: 54%
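To make the CG claim in this quotation concrete, here is a minimal sketch of conjugate gradient applied to a regularized kernel system, in the spirit of the kernel-learning setting the quotation cites. The Gaussian kernel, bandwidth, and regularization level are illustrative assumptions, not choices from the cited papers.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=None):
    """Solve A x = b for symmetric positive-definite A by conjugate gradient.

    Each step minimizes the error over a growing Krylov subspace, which is
    why CG typically needs far fewer iterations than plain gradient descent
    on the same system.
    """
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x                   # residual
    p = r.copy()                    # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of the direction
        rs = rs_new
    return x

# Toy kernel-learning use: solve (K + n*lam*I) alpha = y, as in kernel
# ridge regression with an (assumed) Gaussian kernel.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)
K = np.exp(-((X - X.T) ** 2) / 0.1)          # Gram matrix
A = K + len(y) * 1e-3 * np.eye(len(y))
alpha = conjugate_gradient(A, y)
print(f"final residual norm: {np.linalg.norm(A @ alpha - y):.2e}")
```

Plain gradient descent on the same system contracts the residual at a rate governed by the condition number, while CG's iterate is optimal over the whole Krylov subspace at every step, which is the behaviour the quotation refers to.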
“…The analysis depends on the eigenvalue behaviour of the operator Λ = T…. Note that the assumption implies β* ∈ H with additional smoothness. In [27], the authors use this source condition to derive the minimax and faster convergence rates for Tikhonov regularization with 0 < α ≤ 1/2.…”
Section: The Main Results (mentioning)
confidence: 99%
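The source-condition argument in this quotation can be illustrated numerically. Below is a minimal spectral sketch of how the Tikhonov approximation error decays like λ^α when β* = Λ^α u; the polynomial eigenvalue decay, the choice α = 0.5, and the λ grid are all illustrative assumptions, not the operator Λ of the cited papers.

```python
import numpy as np

# Spectral toy model of Tikhonov regularization under a source condition.
# Operator Lambda with (assumed) polynomially decaying eigenvalues mu_j = j^{-2};
# the truth beta* = Lambda^alpha u, which encodes the extra smoothness
# ("beta* in H with additional smoothness") mentioned in the quotation.
j = np.arange(1, 501)
mu = j ** (-2.0)                 # eigenvalues of Lambda
alpha = 0.5                      # source-condition exponent, 0 < alpha <= 1/2
u = np.ones_like(mu)             # representer in the source condition
beta_star = mu ** alpha * u      # coefficients of beta* in the eigenbasis

for lam in [1e-2, 1e-3, 1e-4]:
    # Tikhonov filter in the eigenbasis: beta_hat_j = mu_j/(mu_j + lam) * beta*_j,
    # so the error is lam/(mu_j + lam) * beta*_j coordinatewise.
    beta_hat = mu / (mu + lam) * beta_star
    bias = np.sqrt(np.sum((beta_hat - beta_star) ** 2))
    # The approximation error tracks lam**alpha, the rate the quotation cites.
    print(f"lam={lam:.0e}  bias={bias:.4f}  lam^alpha={lam**alpha:.4f}")
```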
“…A6* is more interpretable and easier to verify than A6. It is also more commonly used in the literature (Brownlees et al., 2015; Zhang et al., 2020; Paul et al., 2021b).…”
Section: Alternative Assumptions (mentioning)
confidence: 99%
“…Linear models, especially regularized ones, are well known to admit faster rates of convergence than the ERM rate of O(1/√n). For example, SVMs (Steinwart and Scovel, 2007), linear models (Sridharan et al., 2008), and more recently functional linear models (Zhang et al., 2020) are all known to achieve an error rate of O(1/n). It should come as no surprise that MoM estimators also achieve this so-called “fast rate” under additional assumptions.…”
Section: Fast Rates (mentioning)
confidence: 99%
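As a rough illustration of the slow/fast rate gap this quotation describes, the toy simulation below estimates the excess risk of ridge regression in a well-specified finite-dimensional model and compares it with 1/n and 1/√n reference values. The Gaussian design, dimension, and regularization level are illustrative assumptions; this parametric setup only mimics, and does not reproduce, the functional setting of Zhang et al. (2020).

```python
import numpy as np

rng = np.random.default_rng(1)

def excess_risk(n, d=5, lam=1e-3, reps=200):
    """Monte Carlo estimate of the excess prediction risk of ridge at size n."""
    beta = np.ones(d)
    risks = []
    for _ in range(reps):
        X = rng.standard_normal((n, d))
        y = X @ beta + rng.standard_normal(n)
        beta_hat = np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)
        # For isotropic Gaussian design, the excess risk
        # E[(x^T (beta_hat - beta))^2] equals ||beta_hat - beta||^2.
        risks.append(np.sum((beta_hat - beta) ** 2))
    return np.mean(risks)

for n in [100, 400, 1600]:
    r = excess_risk(n)
    print(f"n={n:5d}  excess risk={r:.5f}  "
          f"1/n={1/n:.5f}  1/sqrt(n)={1/np.sqrt(n):.5f}")
```

Quadrupling n roughly quarters the measured risk, tracking the 1/n reference column rather than 1/√n, which is the "fast rate" behaviour the quotation attributes to regularized linear and functional linear models.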