2019
DOI: 10.1088/1361-6420/ab0b15

Optimal convergence rates for sparsity promoting wavelet-regularization in Besov spaces

Abstract: This paper deals with Tikhonov regularization for linear and nonlinear ill-posed operator equations with wavelet Besov norm penalties. We focus on $B^0_{p,1}$ penalty terms, which yield estimators that are sparse with respect to a wavelet frame. Our framework includes, among others, the Radon transform and some nonlinear inverse problems in differential equations with distributed measurements. Using variational source conditions it is shown that such estimators achieve minimax-optimal rates of convergence for finitely…
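As a hedged sketch of the estimators the abstract refers to (the notation $F$, $g^{\mathrm{obs}}$, $Y$ and the exact weighting are our assumptions; the paper's normalization may differ), the penalized estimator has the standard Tikhonov form

\[
\hat f_\alpha \in \operatorname*{argmin}_{f} \Big[ \tfrac{1}{2} \big\| F(f) - g^{\mathrm{obs}} \big\|_{Y}^{2} + \alpha \, \| f \|_{B^{0}_{p,1}} \Big],
\]

where $F$ is the (possibly nonlinear) forward operator, $g^{\mathrm{obs}}$ the noisy data in a Hilbert space $Y$, and $\alpha > 0$ the regularization parameter. For $p = 1$ the penalty is an $\ell^1$-type norm of the wavelet coefficients, which forces many of them to zero.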


Cited by 17 publications (54 citation statements) · References 44 publications
“…For p > 1 only group sparsity in the levels is enforced, but not sparsity of the wavelet coefficients within each level. As a main result of this paper we demonstrate that the analysis in [24], as well as other works to be discussed below, does not capture the full potential of the estimators (1), i.e. the most commonly used case p = 1: even though the error bounds in [24] are optimal in a minimax sense, more precisely in a worst-case scenario over $B^s_{p,\infty}$-balls, we will derive faster rates of convergence for an important class of functions, which includes piecewise smooth functions.…”
Section: Introduction (supporting)
confidence: 56%
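To make the quoted distinction concrete (using the standard wavelet characterization of Besov norms; the cited paper's normalization may differ), the $B^0_{p,1}$ penalty acts on the frame coefficients $x_{j,k} = \langle f, \psi_{j,k} \rangle$ on a $d$-dimensional domain as

\[
\| f \|_{B^{0}_{p,1}} \;=\; \sum_{j \ge 0} 2^{jd\left(\frac12-\frac1p\right)} \Big( \sum_{k} |x_{j,k}|^{p} \Big)^{1/p} .
\]

For $p = 1$ this is a weighted $\ell^1$ norm over all indices $(j,k)$, so minimizers set individual coefficients to zero; for $p > 1$ the outer $\ell^1$ sum can only make whole levels $j$ vanish while the inner $\ell^p$ norm spreads mass within a level, which is exactly the "group sparsity in the levels" described above.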
“…As a main result of this paper we demonstrate that the analysis in [24], as well as other works to be discussed below, does not capture the full potential of the estimators (1), i.e. the most commonly used case p = 1: even though the error bounds in [24] are optimal in a minimax sense, more precisely in a worst-case scenario over $B^s_{p,\infty}$-balls, we will derive faster rates of convergence for an important class of functions, which includes piecewise smooth functions. The crucial point is that such functions also belong to Besov spaces with a larger smoothness index s but a smaller integrability index p < 1.…”
Section: Introduction (supporting)
confidence: 56%
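A concrete instance of the trade-off described in the quote (our illustration, not taken from the paper): let $d = 1$ and $f = \mathbf{1}_{[0,t)}$ on $[0,1]$ with a non-dyadic jump point $t$. In the Haar basis, at each level $j$ only the single wavelet whose support straddles $t$ has a nonzero coefficient, of size about $2^{-j/2}$, so

\[
\|f\|_{b^{s}_{p,p}}^{p} \;\sim\; \sum_{j \ge 0} 2^{jp\left(s+\frac12-\frac1p\right)} \, 2^{-jp/2} \;=\; \sum_{j \ge 0} 2^{jp\left(s-\frac1p\right)} \;<\; \infty \quad\Longleftrightarrow\quad s < \tfrac{1}{p} .
\]

A piecewise constant function therefore has arbitrarily large Besov smoothness $s$, provided the integrability index satisfies $p < 1/s$.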
“…We believe that our upper bounds are too pessimistic; this can be seen, e.g., by comparing with the case q = 1 treated in [HM18]. But note that in this case the VSC is not formulated with respect to the Bregman distance.…”
Section: Stability Estimates for Electrical Impedance Tomography (mentioning)
confidence: 89%
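For background on the "VSC" mentioned in the quote: a variational source condition, in the generic form common in this literature (exponent and loss conventions vary between papers), assumes there exist $\beta \in (0,1]$ and a concave, increasing $\psi$ with $\psi(0)=0$ such that

\[
\beta\, \Delta\big(f, f^{\dagger}\big) \;\le\; \mathcal{R}(f) - \mathcal{R}(f^{\dagger}) + \psi\Big( \big\| F(f) - F(f^{\dagger}) \big\|^{2} \Big) \qquad \text{for all admissible } f,
\]

where $f^{\dagger}$ is the exact solution, $\mathcal{R}$ the penalty functional, and $\Delta$ the loss used to measure reconstruction error; $\psi$ then determines the convergence rate. $\Delta$ is often taken to be the Bregman distance of $\mathcal{R}$, but, as the quote points out, it need not be.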
“…An analysis where the sparsity assumption is violated was developed in a series of papers starting with [BFH13] and leading to [FG18]. For an analysis in the spirit of the setup presented here that extends to the nonlinear case, see [HM18]. Furthermore, the results have been extended to elastic net regularization in [CHZ17].…”
(mentioning)
confidence: 99%