Preprint, 2021
DOI: 10.48550/arxiv.2102.09526

Convex regularization in statistical inverse learning problems

Abstract: We consider a statistical inverse learning problem, where the task is to estimate a function $f$ based on noisy point evaluations of $Af$, where $A$ is a linear operator. The function $Af$ is evaluated at i.i.d. random design points $u_n$, $n = 1, \ldots, N$, generated by an unknown general probability distribution. We consider Tikhonov regularization with general convex and $p$-homogeneous penalty functionals and derive concentration rates of the regularized solution to the ground truth measured in the symmetric Bregman distance.
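As a point of orientation, the estimator the abstract describes can be sketched as the following variational problem; the exact functional, weighting, and notation are our own reconstruction of a standard Tikhonov formulation, not quoted from the paper:

% Sketch of a Tikhonov-regularized estimator from noisy point
% evaluations y_n ≈ (Af)(u_n); the convex penalty R is assumed
% p-homogeneous, as stated in the abstract. Details are assumptions.
\[
  f_\alpha \in \operatorname*{arg\,min}_{f}
    \frac{1}{N} \sum_{n=1}^{N} \bigl( (Af)(u_n) - y_n \bigr)^2
    + \alpha\, R(f),
  \qquad
  R(\lambda f) = \lambda^p R(f) \quad \text{for all } \lambda \ge 0.
\]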

Cited by 2 publications (47 citation statements). References 23 publications.
“…• In theorem 3.3, we derive convergence rates on the symmetric Bregman distance for the regularization term $R(f) = \frac{1}{p}\|f\|_X^p + \iota_+(f)$, where $1 < p < 2$, $X$ is the Besov space $B^s_p$, and $\iota_+$ is the indicator function of the non-negativity constraint. Under suitable assumptions, our rates coincide with those of the unconstrained case proved in [7], for both noise regimes.…”
Section: Introduction (supporting)
Confidence: 76%
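For reference, the symmetric Bregman distance in which these rates are stated is conventionally defined as below; this is the standard definition for a convex penalty $R$, given here as context rather than quoted from either paper:

% Symmetric Bregman distance induced by a convex functional R,
% evaluated at subgradients r_f ∈ ∂R(f) and r_g ∈ ∂R(g).
\[
  D_R^{\mathrm{sym}}(f, g) = \langle r_f - r_g,\, f - g \rangle,
  \qquad r_f \in \partial R(f),\; r_g \in \partial R(g).
\]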
“…Following the path paved by [7], we do not derive any lower bounds and we do not prove any minimax-optimality of our results. As we briefly discuss in section 6, in line with [7, 11], similar techniques can lead to minimax-optimal rates in inverse problems when considered against suitable source conditions.…”
Section: Introduction (mentioning)
Confidence: 81%