2017 · DOI: 10.1088/1361-6420/33/7/074004

Learning regularization parameters for general-form Tikhonov

Abstract: In this work we consider the problem of finding optimal regularization parameters for general-form Tikhonov regularization using training data. We formulate the general-form Tikhonov solution as a spectral filtered solution using the generalized singular value decomposition of the matrix of the forward model and a given regularization matrix. Then, we find the optimal regularization parameter by minimizing the average of the errors between the filtered solutions and the true data. We extend the approach to the…
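The abstract describes the core recipe: write the Tikhonov solution as a spectral filtered solution and pick the regularization parameter that minimizes the average error over training pairs. The following is a minimal sketch of that idea, assuming the standard-form special case L = I, where the ordinary SVD plays the role of the paper's GSVD. The names (A, B_train, X_train), the noise level, the parameter bounds, and the use of SciPy's bounded scalar minimizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import svd
from scipy.optimize import minimize_scalar

def tikhonov_filtered_solution(U, s, Vt, b, lam):
    # Spectral filtered solution x(lam) = sum_i phi_i * (u_i^T b / s_i) * v_i,
    # with Tikhonov filter factors phi_i = s_i^2 / (s_i^2 + lam^2).
    coeffs = (s / (s**2 + lam**2)) * (U.T @ b)
    return Vt.T @ coeffs

def learn_lambda(A, B_train, X_train):
    # Choose lam by minimizing the average squared error between the
    # filtered solutions for the noisy training data b_i and the true x_i.
    U, s, Vt = svd(A, full_matrices=False)

    def avg_error(lam):
        return np.mean([
            np.linalg.norm(tikhonov_filtered_solution(U, s, Vt, b, lam) - x) ** 2
            for b, x in zip(B_train, X_train)
        ])

    # Search bounds are an illustrative assumption.
    res = minimize_scalar(avg_error, bounds=(1e-8, 1e2), method="bounded")
    return res.x

# Hypothetical usage with synthetic training data:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
X_train = [rng.standard_normal(30) for _ in range(10)]
B_train = [A @ x + 0.05 * rng.standard_normal(50) for x in X_train]
lam_opt = learn_lambda(A, B_train, X_train)
```

For general-form Tikhonov with L ≠ I, the abstract indicates the same recipe applies with the GSVD of the forward matrix and the regularization matrix: the generalized singular values replace the s_i in the filter factors, while the learning step (minimizing the average training error over the parameter) is unchanged.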

Cited by 37 publications (60 citation statements) · References 40 publications (98 reference statements)

Citation statements (ordered by relevance):
“…In inverse problems, the optimal inversion and experimental acquisition setup is discussed in the context of optimal model design in works by Haber, Horesh and Tenorio [25,26], as well as Ghattas et al [3,9]. Recently, parameter learning in the context of functional variational regularisation models (1.1) also entered the image processing community with works by the authors [10,22], Kunisch, Pock and co-workers [14,33], Chung et al [16] and Hintermüller et al [30].…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
“…However, (103) is sufficient to explain the majority of current state-of-the-art parameter learning approaches in the context of inverse problems. These cover the finite-dimensional Markov random field models proposed in [325,346,143,124,334], the optimal model design approaches in [199,198,65,40], the optimal regularization parameter estimation in variational regularization [89,128,137,138,90,127], to training optimal operators in regularization functionals [123,122], reaction diffusion processes [125,121], so-called variational networks [202,247,244] and other works related to image processing [301,214].…”
Section: Learning · Citation type: mentioning · Confidence: 99%
“…Definition 3.5. We say that (y*, u*) satisfies the second-order sufficient optimality conditions of (P_{α,y^δ}) if there exist λ* ∈ Z and η > 0 such that (y*, u*, λ*) is a KKT point and…”
Section: Optimality Conditions · Citation type: mentioning · Confidence: 99%