This paper deals with Tikhonov regularization for linear and nonlinear ill-posed operator equations with wavelet Besov norm penalties. We focus on $B^0_{p,1}$ penalty terms, which yield estimators that are sparse with respect to a wavelet frame. Our framework includes, among others, the Radon transform and some nonlinear inverse problems in differential equations with distributed measurements. Using variational source conditions, it is shown that such estimators achieve minimax-optimal rates of convergence for finitely smoothing operators in certain Besov balls, both for deterministic and for statistical noise models.
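For orientation, the estimators in question are of standard Tikhonov form. The display below is a sketch in generic notation, assuming a forward operator $F$, noisy data $g^{\mathrm{obs}}$, an $L^2$-normalized wavelet frame $\{\psi_{j,k}\}$ in $d$ dimensions, and one common normalization of the $B^0_{p,1}$ sequence norm (conventions vary between references):

```latex
% Tikhonov estimator with B^0_{p,1} wavelet penalty (sketch, generic notation):
\hat{f}_\alpha \in \operatorname*{argmin}_f
  \Big[ \tfrac{1}{2}\, \| F(f) - g^{\mathrm{obs}} \|_Y^2
        + \alpha \, \| f \|_{B^0_{p,1}} \Big],
\qquad
\| f \|_{B^0_{p,1}}
  := \sum_{j \ge 0} 2^{j d \left(\frac{1}{2} - \frac{1}{p}\right)}
     \Big( \sum_k |\langle f, \psi_{j,k} \rangle|^p \Big)^{1/p}.
```

The $\ell^1$-type structure of the norm (outer sum over scales with fine index $q=1$) is what makes the penalty sparsity promoting.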
We study Tikhonov regularization for possibly nonlinear inverse problems with weighted $\ell^1$-penalization. The forward operator, mapping from a sequence space to an arbitrary Banach space, typically an $L^2$-space, is assumed to satisfy a two-sided Lipschitz condition with respect to a weighted $\ell^2$-norm and the norm of the image space. We show that in this setting approximation rates of arbitrarily high Hölder-type order in the regularization parameter can be achieved, and we characterize maximal subspaces of sequences on which these rates are attained. On these subspaces the method also converges with optimal rates in terms of the noise level with the discrepancy principle as parameter choice rule. Our analysis includes the case that the penalty term is not finite at the exact solution ('oversmoothing'). As a standard example we discuss wavelet regularization in Besov spaces $B^r_{1,1}$. In this setting we demonstrate in numerical simulations for a parameter identification problem in a differential equation that our theoretical results correctly predict improved rates of convergence for piecewise smooth unknown coefficients.
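To make the scheme concrete, here is a minimal numerical sketch of weighted $\ell^1$ Tikhonov regularization with the discrepancy principle, assuming a linear forward operator given as a matrix (the setting above allows nonlinear operators between sequence and Banach spaces; the function names, the geometric $\alpha$-search, and the value $\tau = 1.5$ are illustrative choices, not the authors' implementation):

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft-thresholding: the proximal map of t * |.|_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def weighted_l1_tikhonov(T, g_obs, weights, alpha, n_iter=2000):
    """Minimize 0.5 * ||T f - g_obs||^2 + alpha * sum_j weights[j] * |f_j|
    by ISTA (proximal gradient descent); T is a matrix for simplicity."""
    step = 1.0 / np.linalg.norm(T, 2) ** 2  # 1 / Lipschitz constant of the gradient
    f = np.zeros(T.shape[1])
    for _ in range(n_iter):
        grad = T.T @ (T @ f - g_obs)
        f = soft_threshold(f - step * grad, step * alpha * weights)
    return f

def discrepancy_principle(T, g_obs, weights, delta, tau=1.5, alpha0=1.0, q=0.5):
    """Shrink alpha geometrically until the residual falls below tau * delta."""
    alpha = alpha0
    while alpha > 1e-12:
        f = weighted_l1_tikhonov(T, g_obs, weights, alpha)
        if np.linalg.norm(T @ f - g_obs) <= tau * delta:
            break
        alpha *= q
    return f, alpha
```

ISTA is used here only because its proximal step is exactly the weighted soft-thresholding induced by the penalty; for the theory, any minimizer of the Tikhonov functional would do.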
We present a new approach to convergence rate results for variational regularization. Avoiding Bregman distances and using image space approximation rates as source conditions, we prove a nearly minimax theorem showing that the modulus of continuity is an upper bound on the reconstruction error up to a constant. Applied to Besov space regularization, we obtain convergence rate results for $B^0_{2,q}$- and $B^0_{p,p}$-penalties without restrictions on $p, q \in (1, \infty)$. Finally, we prove the equivalence of Hölder-type variational source conditions, bounds on the defect of the Tikhonov functional, and image space approximation rates.
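For reference, a common definition of the modulus of continuity and of the resulting near-minimax bound, in generic notation ($T$ the forward operator, $K$ the a priori set, $R$ the reconstruction method; details may differ from the paper's exact definitions):

```latex
% Modulus of continuity on a set K (one standard definition):
\omega(\delta, K) := \sup \big\{ \| f_1 - f_2 \|_X \;:\;
    f_1, f_2 \in K, \ \| T f_1 - T f_2 \|_Y \le \delta \big\}.
% "Nearly minimax" then means a bound of the form
\sup_{f \in K} \ \sup_{\| g^{\mathrm{obs}} - T f \|_Y \le \delta}
    \| R(g^{\mathrm{obs}}) - f \|_X \ \le \ C \, \omega(\delta, K)
% with a constant C independent of the noise level delta.
```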
$T^*\omega \in \partial\mathcal{R}(f^\dagger)$ for some $\omega \in Y$. (2.4)

We refer to [61] for a comprehensive treatment of the modulus of continuity for linear operators in Hilbert spaces. We finish this section with a practicable criterion to verify order optimality.

Corollary 2.7 (Order optimality via the modulus of continuity). In the setting of Proposition 2.6, suppose $\varphi \colon (0,\infty) \to [0,\infty)$ is non-decreasing and that there exist constants $c_\omega > 0$ and $\delta_0 > 0$ such that [...]. Then $R$ is an order optimal reconstruction method on $K$.

Literature on convergence rates for sparsity promoting regularization

We give a brief overview of the literature on convergence rate theory for $\ell^1$-regularization. In the early paper [78] from 2008 the rate $O(\delta^{1/2})$ in the $\ell^1$-norm is shown under the assumptions that the unknown solution is sparse (i.e. has only finitely many non-vanishing entries) and that the forward operator is linear. The paper [52] provides the rate $O(\delta^{1/2})$ for nonlinear operators under a source condition that coincides with (2.4) in the linear case. Furthermore, by additionally requiring sparsity of the unknown solution, the authors achieve the linear rate $O(\delta)$ and discuss that, in contrast to classical Tikhonov regularization, whose highest possible rate is $O(\delta^{2/3})$, no saturation effect occurs in $\ell^1$-regularization. To the best of the author's knowledge, the linear rate $O(\delta)$ was first proven in [14] for a regularization scheme similar to (2.1), which is called the residual method in [52]. In [50] a linear convergence rate is shown in the more general setting of positively homogeneous functionals under the source condition (2.4) and a mild injectivity-type assumption. Furthermore, in [53] it is proven (again under a mild injectivity-type assumption) that condition (2.4) is not only sufficient but even necessary for a linear convergence rate of $\ell^1$-regularization. The phenomenon of exact recovery, i.e. the question whether the support of the estimator equals the support of a sparse exact solution, is treated affirmatively in [79].

However, it is usually more realistic to assume that the true solution is only approximately sparse, in the sense that it can be well approximated by sparse vectors. Using a variational source condition, convergence rates for non-sparse solutions are shown in [11] for linear forward operators. Therein the analysis is based on the assumption that the unit vectors belong to the range of the adjoint operator. The rates are characterized in terms of the growth of the norms of the preimages of the unit vectors and the speed of decay of the true solution. In [3] the range condition is discussed further, and the convergence rate results are extended to $\ell^q$-regularization with $q < 1$. We will discuss the latter range condition in more detail in Section 3.2. In [41] a relaxation of the condition on the unit vectors is introduced, and it is shown