2020
DOI: 10.48550/arxiv.2002.10208
Preprint

Inverse learning in Hilbert scales

Abstract: We study the linear ill-posed inverse problem with noisy data in the statistical learning setting. Approximate reconstructions from random noisy data are sought with general regularization schemes in Hilbert scale. We discuss the rates of convergence for the regularized solution under the prior assumptions and a certain link condition. We express the error in terms of certain distance functions. For regression functions with smoothness given in terms of source conditions, the error bound can then be explicitly …
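For orientation only, a minimal sketch of the classical Hilbert-scale regularization setting that these terms come from; the symbols A, L, s, a, p and the Tikhonov form below are illustrative assumptions on our part, not the paper's exact formulation (amsmath assumed):

\[
\begin{aligned}
&\text{Hilbert scale generated by a self-adjoint } L \ge \mathrm{id}: \quad
   X_t := \mathcal{D}(L^{t}), \qquad \|f\|_{t} := \|L^{t} f\|, \\
&\text{regularized solution from noisy data } g^{\delta}: \quad
   f_{\lambda}^{\delta} := \operatorname*{arg\,min}_{f}\ \|A f - g^{\delta}\|^{2} + \lambda\, \|L^{s} f\|^{2}, \\
&\text{link condition: } \ \|A f\| \asymp \|L^{-a} f\|, \qquad
 \text{source condition: } \ f^{\dagger} = L^{-p} v, \quad \|v\| \le \rho .
\end{aligned}
\]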

Cited by 1 publication (3 citation statements)
References 22 publications
“…for some continuous increasing function θ : ℝ₊ → ℝ₊, we can give an upper bound for the distance function. Thanks to [20, Theorem 5.9], see also [35], after rescaling, we obtain…”
Section: Assumption 4.4 (Complexity); mentioning
confidence: 99%
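Read in isolation, the bound referred to in this excerpt is of the standard distance-function type; a hedged sketch of its usual form (the benchmark element L^{-p} v and radius R are illustrative assumptions, not taken from the citing paper) is:

\[
  d_{f^{\dagger}}(R) \;:=\; \inf_{\|v\|\le R} \bigl\| f^{\dagger} - L^{-p} v \bigr\|
  \;\le\; \theta\!\left(\tfrac{1}{R}\right),
  \qquad \theta : \mathbb{R}_{+} \to \mathbb{R}_{+} \ \text{continuous and increasing.}
\]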
“…‖h‖_H and f_H satisfies a classical Hölder source condition in terms of the covariance operator T_s, ensuring optimality according to [7], [5]. In [35], in the context of inverse problems, the authors derive optimality under an additional lifting condition, relating smoothness given in terms of L^{-1} to smoothness in terms of T. However, it is open to show optimality, i.e.…”
Section: Assumption 4.4 (Complexity); mentioning
confidence: 99%
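For context, a Hölder source condition with respect to a covariance operator and the kind of lifting relation mentioned in this excerpt typically read as follows; the exponents r and β and the generic operators T and L shown here are illustrative assumptions, not the citing paper's notation:

\[
  f_{H} = T^{\,r} v, \quad \|v\| \le R
  \qquad \text{(Hölder source condition w.r.t. the covariance operator } T\text{),}
\]
\[
  \|L^{-1} f\| \asymp \|T^{\,\beta} f\|
  \qquad \text{(a lifting/link relation between smoothness in terms of } L^{-1} \text{ and of } T\text{).}
\]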