2011
DOI: 10.1088/0266-5611/27/10/105007

Convergence rates for Morozov's discrepancy principle using variational inequalities

Abstract: We derive convergence rates for Tikhonov-type regularization with convex penalty terms, where the regularization parameter is chosen according to Morozov's discrepancy principle and variational inequalities are used to generalize classical source and nonlinearity conditions. Rates are obtained first with respect to the Bregman distance and a Taylor-type distance, and those results are combined to derive rates in norm and the penalty term topology. For the special case of the sparsity promoting weighted ℓp-norms …
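To make the abstract's setting concrete in its simplest classical instance, the following is a minimal numerical sketch of Morozov's discrepancy principle for linear Tikhonov regularization with a quadratic penalty. The test problem, function names, and the safety factor τ = 1.1 are illustrative assumptions, not taken from the paper, which treats general convex penalties and nonlinear operators.

```python
import numpy as np

def tikhonov(A, y, alpha):
    # Minimizer of ||A x - y||^2 + alpha ||x||^2 via the normal equations.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def morozov(A, y_delta, delta, tau=1.1, alpha=1.0, q=0.5):
    # Shrink alpha geometrically until the discrepancy principle
    # ||A x_alpha - y_delta|| <= tau * delta is satisfied.
    x = tikhonov(A, y_delta, alpha)
    while np.linalg.norm(A @ x - y_delta) > tau * delta and alpha > 1e-14:
        alpha *= q
        x = tikhonov(A, y_delta, alpha)
    return x, alpha

# Illustrative ill-conditioned forward operator and data at noise level delta.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 40), 10, increasing=True)
x_true = rng.standard_normal(10)
y = A @ x_true
delta = 1e-3
noise = rng.standard_normal(40)
y_delta = y + delta * noise / np.linalg.norm(noise)   # ||y_delta - y|| = delta

x_rec, alpha = morozov(A, y_delta, delta)
print(f"alpha = {alpha:.2e}, residual = {np.linalg.norm(A @ x_rec - y_delta):.2e}")
```

The paper itself works with general convex penalty terms and variational inequalities in place of classical source conditions; the sketch above only illustrates the parameter choice rule for which its convergence rates are proved.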

Cited by 33 publications (63 citation statements) · References 21 publications
“…We may add that results for different parameter choice rules do exist, e.g., for the discrepancy principle [1]. We also would like to mention convergence speed results, which require, e.g., source conditions for the solution [2,3,8,14,26,27,41,51]. Convergence and convergence rates for sparse regularization in a Bayes setup have been recently investigated in [25].…”
Section: Tikhonov Regularization with Sparsity Constraints
Citation type: mentioning, confidence: 99%
“…For successful application of ℓ1-regularization, existence of minimizers x_α^δ of (1) and their stability with respect to perturbations in the data y^δ have to be ensured. Further, by choosing the regularization parameter α > 0 in dependence on the noise level δ and the given data y^δ, one has to guarantee that corresponding minimizers converge to a solution x† of (2) if the noise level goes to zero.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
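To illustrate the minimization problem this excerpt refers to, here is a minimal, hypothetical ISTA-type solver for the sparsity-promoting problem min_x ½‖Ax − y^δ‖² + α‖x‖₁ in the finite-dimensional linear case; the function name, step size, and iteration count are assumptions, not part of the cited work.

```python
import numpy as np

def ista(A, y_delta, alpha, n_iter=500):
    # Iterative soft-thresholding for min_x 0.5 ||A x - y_delta||^2 + alpha ||x||_1.
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y_delta) / L      # gradient step on the misfit term
        x = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)  # soft-thresholding
    return x
```

For the convergence alluded to in the excerpt, α = α(δ, y^δ) must be coupled to the noise level; a standard sufficient a-priori condition in this quadratic-misfit setting is α → 0 with δ²/α → 0 as δ → 0, while Morozov's discrepancy principle provides an a-posteriori alternative.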
“…However, under the following assumption, the existence of a solution x ∈ D(F) to (1) implies the existence of an x̄-minimum norm solution (see [107, Lemma 3.2]), and if (1) has a solution in D(F) and δ_n → 0 is a sequence of noise levels with corresponding data y_n = y^{δ_n} such that ‖y_n − y‖ ≤ δ_n, then every associated sequence {x_n} of minimizers of (6) has a convergent subsequence, and all limit elements are x̄-minimum norm solutions x† of (1).…”
Section: Tikhonov Regularization in Hilbert Spaces with Quadratic Misfit
Citation type: mentioning, confidence: 99%
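For readability, the following LaTeX fragment spells out the standard Hilbert-space Tikhonov functional with reference element x̄ that the minimization problem (6) in this excerpt presumably denotes; (6) itself is not displayed in the excerpt, so this form is an assumption based on the section title.

```latex
\begin{equation*}
  x_\alpha^\delta \in \operatorname*{arg\,min}_{x \in \mathcal{D}(F)}
    \bigl\| F(x) - y^\delta \bigr\|_Y^2
    + \alpha \bigl\| x - \bar{x} \bigr\|_X^2
\end{equation*}
```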
“…Please refer to the papers [6, 32, 44, 51, 53, 63, 99] for a further discussion of alternative misfit functionals S. In most cases convex penalty functionals R are preferred. An important class of penalty functionals is formed by the norm powers R(x) := ‖x‖^q_X̃, q > 0, x ∈ X̃, where as an alternative to the standard case X = X̃ the space X̃ can also be chosen as a dense subspace of X with a stronger norm, e.g., X = L^q.…”
Section: Variational Regularization in Banach Spaces with Convex Penalties
Citation type: mentioning, confidence: 99%
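Finally, as a hedged sketch of the general variational regularization problem this last excerpt discusses (the exact formulation and normalization in the cited source may differ), the misfit functional S and the norm-power penalty combine as:

```latex
\begin{equation*}
  \min_{x \in \mathcal{D}(F)}\;
    S\bigl(F(x), y^\delta\bigr) + \alpha\, R(x),
  \qquad
  R(x) = \| x \|_{\tilde{X}}^{q}, \quad q > 0, \; x \in \tilde{X}.
\end{equation*}
```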