2018
DOI: 10.1088/1361-6420/aade77

A bilevel approach for parameter learning in inverse problems

Abstract: A learning approach to selecting regularization parameters in multi-penalty Tikhonov regularization is investigated. It leads to a bilevel optimization problem, where the lower level problem is a Tikhonov regularized problem parameterized in the regularization parameters. Conditions which ensure the existence of solutions to the bilevel optimization problem of interest are derived, and these conditions are verified for two relevant examples. Difficulties arising from the possible lack of convexity of the lower…
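
To make the structure concrete, the following is a minimal sketch of the kind of bilevel problem the abstract describes, written for hypothetical training pairs (f_i, u_i†) and penalty functionals R_j; the notation is illustrative and not taken from the paper itself.

```latex
% Hedged sketch: the upper level selects the regularization parameters
% \lambda = (\lambda_1,\dots,\lambda_m); the lower level is the
% multi-penalty Tikhonov problem they parameterize. Notation illustrative.
\min_{\lambda \geq 0} \ \sum_{i=1}^{N} \tfrac{1}{2} \bigl\| u_\lambda(f_i) - u_i^\dagger \bigr\|^2
\quad \text{subject to} \quad
u_\lambda(f_i) \in \operatorname*{arg\,min}_{u} \ \tfrac{1}{2} \| A u - f_i \|^2 + \sum_{j=1}^{m} \lambda_j R_j(u).
```

When the lower-level problem is nonconvex, the arg min need not be a singleton, which is one source of the difficulties the abstract alludes to.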

Cited by 40 publications (42 citation statements). References 27 publications.
“…We derive necessary optimality conditions for the learning problem (BP). Here we essentially follow the discussion provided in [25, Section 5]. Throughout this section it is assumed that the function e describing the state constraint is well defined and at least once continuously F-differentiable on Y × H^s(Ω).…”
Section: Optimality Conditions
mentioning
confidence: 99%
“…Learning strategies for choosing regularization parameters in the context of multi-penalty Tikhonov regularization are investigated, e.g., in [28, 14, 11, 25]. The problem of learning the discrepancy function is considered in [15].…”
mentioning
confidence: 99%
“…For a given observation model U_O, one would theoretically expect to retrieve some optimal parameterization Φ of regularization term U_R such that the reconstruction error for the true state is truly minimized. This typically leads to considering the following bi-level optimization problem [2]…”
Section: Problem Statement and Related Work
mentioning
confidence: 99%
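
As a concrete, hedged illustration of such a bilevel problem (not the formulation of [2] or of the paper under review), the sketch below learns a single Tikhonov parameter by grid search, exploiting the closed-form lower-level solution of ridge regularization; all names, sizes, and the synthetic data are made up for the example.

```python
# Minimal illustration (not the paper's algorithm): learn a single Tikhonov
# parameter lam by grid search, using the closed-form lower-level solution
# u(lam) = (A^T A + lam I)^{-1} A^T f of the ridge-regularized problem.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 20, 30, 5                      # unknowns, measurements, training pairs
A = rng.normal(size=(m, n))              # forward operator (assumed known)

# Synthetic training set: ground-truth states and noisy observations.
truths = [rng.normal(size=n) for _ in range(N)]
data = [A @ u + 0.05 * rng.normal(size=m) for u in truths]

def lower_level(lam, f):
    """Closed-form minimizer of 0.5*||Au - f||^2 + 0.5*lam*||u||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ f)

def upper_level_loss(lam):
    """Mean squared reconstruction error over the training pairs."""
    return np.mean([np.sum((lower_level(lam, f) - u) ** 2)
                    for u, f in zip(truths, data)])

# Grid search stands in for the optimality-condition-based approaches
# discussed in the paper and its citing works.
grid = np.logspace(-4, 1, 50)
lam_opt = grid[np.argmin([upper_level_loss(l) for l in grid])]
print(f"learned lambda ~ {lam_opt:.4g}")
```

The closed-form lower level makes the upper-level loss cheap to evaluate; with nonsmooth or multiple penalties this shortcut disappears, which is precisely where the optimality-condition machinery of the paper becomes relevant.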
“…We benefit from the considered end-to-end architecture to learn jointly all parameters, that is to say that we jointly learn the variational cost U and the associated solver so that we minimize the reconstruction error as targeted in (2). As such, we may learn a regularization term adapted to the available observation setting.…”
Section: Learning Scheme
mentioning
confidence: 99%
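
The joint learning of a variational cost and its solver described above can be mimicked, under strong simplifying assumptions, by unrolling a fixed number of gradient steps and backpropagating the reconstruction error through them. The sketch below is a generic unrolled-optimization toy, not the cited authors' architecture; the quadratic prior, the learnable step size, and all sizes are assumptions for illustration.

```python
# Hedged sketch: jointly learn a cost parameter (lam) and a solver parameter
# (step size) by unrolling K gradient steps and minimizing end-to-end error.
import torch

torch.manual_seed(0)
m, n, K = 30, 20, 10
A = torch.randn(m, n)                                # forward operator (assumed)

log_lam = torch.zeros((), requires_grad=True)        # learnable prior weight
log_step = torch.tensor(-2.0, requires_grad=True)    # learnable solver step size

def solver(f):
    """Unrolled gradient descent on U(u) = 0.5*||Au-f||^2 + 0.5*lam*||u||^2."""
    lam, step = log_lam.exp(), log_step.exp()
    u = torch.zeros(n)
    for _ in range(K):
        grad = A.T @ (A @ u - f) + lam * u
        u = u - step * grad
    return u

opt = torch.optim.Adam([log_lam, log_step], lr=0.05)
for it in range(200):
    u_true = torch.randn(n)
    f = A @ u_true + 0.05 * torch.randn(m)
    loss = torch.sum((solver(f) - u_true) ** 2)      # end-to-end reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the solver is a fixed, differentiable computation graph, the cost parameters and the solver parameters receive gradients from the same reconstruction loss, which is the sense in which cost and solver are learned jointly.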
“…In this paper we tackle the optimal placement problem using a bilevel learning approach [17, 12, 21]. In contrast to optimal experimental design strategies, our framework allows us to work with different quality measures and is not restricted to the A-, D- or E-optimal experimental design paradigms [31].…”
Section: Introduction
mentioning
confidence: 99%