2017
DOI: 10.3389/fams.2017.00003

Optimal Rates for the Regularized Learning Algorithms under General Source Condition

Abstract: We consider learning algorithms under a general source condition with polynomial decay of the eigenvalues of the integral operator, in the vector-valued function setting. We discuss the upper convergence rates of the Tikhonov regularizer under a general source condition corresponding to an increasing monotone index function. The convergence issues are studied for general regularization schemes using the concept of operator monotone index functions in the minimax setting. Further, we also address the minimum possible err…
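The Tikhonov regularization scheme discussed in the abstract corresponds, in the scalar-valued kernel setting, to kernel ridge regression. A minimal numerical sketch is given below; the Gaussian kernel, the bandwidth `sigma`, the regularization parameter `lam`, and the synthetic data are all illustrative assumptions for the demo, not choices from the paper:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def tikhonov_regression(X, y, lam, sigma=1.0):
    # Tikhonov (kernel ridge) estimator: alpha = (K + n*lam*I)^{-1} y,
    # where K is the kernel Gram matrix on the n training points.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ alpha

# Synthetic 1-D regression data (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)
f_hat = tikhonov_regression(X, y, lam=1e-3)
```

The choice of `lam` as a function of the sample size is exactly the parameter choice whose optimality (under source conditions and eigenvalue decay) the paper analyzes.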


Cited by 21 publications (25 citation statements)
References 29 publications
“…Proof of estimate (4.13). We use arguments similar to those appearing in [5,24,16,7]. The treatment of generalized source conditions has been considered in [24] and we use arguments very close in spirit to that reference, deriving here inequalities that are tailored for our needs.…”
Section: Bounds in the Supervised Learning Setting
confidence: 99%
“…, Theorem 2.1 shares its upper convergence rates with the lower convergence rates of Theorems 3.11 and 3.12 [44]. Therefore the choice of parameters is optimal.…”
Section: Convergence Analysis
confidence: 99%
“…Effective dimension. Now we introduce the concept of the effective dimension, which is an important ingredient in deriving the rates of convergence [7,10,14,20,22,30]. The effective dimension is defined as…”
Section: 3
confidence: 99%
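The quotation above breaks off at the definition. In the learning-theory literature cited here, the effective dimension of the integral operator T at regularization level λ is commonly taken to be N(λ) = Tr(T(T + λI)^{-1}); assuming that standard definition, it can be computed directly from the operator's eigenvalues. A minimal sketch with an illustrative polynomially decaying spectrum μ_i = i^{-b} (the decay exponent b and the truncation at 1000 eigenvalues are assumptions for the demo, echoing the polynomial-decay condition in the abstract):

```python
import numpy as np

def effective_dimension(eigvals, lam):
    # N(lam) = Tr(T (T + lam*I)^{-1}) = sum_i mu_i / (mu_i + lam),
    # where mu_i are the eigenvalues of the integral operator T.
    eigvals = np.asarray(eigvals, dtype=float)
    return float(np.sum(eigvals / (eigvals + lam)))

# Polynomially decaying spectrum mu_i = i^{-b} (illustrative assumption).
b = 2.0
mu = np.arange(1, 1001, dtype=float) ** (-b)
for lam in (1e-1, 1e-2, 1e-3):
    print(f"N({lam:g}) = {effective_dimension(mu, lam):.3f}")
```

As λ decreases, N(λ) grows (for this spectrum roughly like λ^{-1/b}), which is what makes it a useful capacity measure when balancing approximation and sample error in the convergence-rate analysis.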