2009
DOI: 10.1088/0266-5611/26/2/025002

Numerical methods for the design of large-scale nonlinear discrete ill-posed inverse problems

Abstract: Design of experiments for discrete ill-posed problems is a relatively new area of research. While there has been some limited work concerning the linear case, little has been done to study design criteria and numerical methods for ill-posed nonlinear problems. We present an algorithmic framework for nonlinear experimental design with an efficient numerical implementation. The data are modeled as indirect noisy observations of the model collected via a set of plausible experiments. An inversion estimate based on…
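
As a rough illustration of the kind of framework the abstract describes (not the authors' algorithm), the sketch below chooses experiment weights by minimizing the average reconstruction error of a Tikhonov-regularized estimate over a set of "plausible" training models. The forward operator J, the training models, the noise level, the regularization weight, and the greedy binary selection are all illustrative assumptions; the paper treats continuous weights with a sparsity-promoting penalty.

```python
# Minimal sketch of weighted-experiment design for a regularized inverse problem.
# Not the authors' implementation; all quantities below are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_exp, n_param = 20, 10                      # candidate experiments, model parameters
J = rng.standard_normal((n_exp, n_param))    # toy (linearized) sensitivities
alpha = 1e-2                                 # Tikhonov regularization weight
m_train = rng.standard_normal((n_param, 5))  # "plausible" training models

def estimate(w, d):
    """Weighted, Tikhonov-regularized least-squares estimate for data d."""
    W = np.diag(w)
    A = J.T @ W @ J + alpha * np.eye(n_param)
    return np.linalg.solve(A, J.T @ W @ d)

def design_objective(w, noise_std=0.05):
    """Average reconstruction error over the training models."""
    err = 0.0
    for k in range(m_train.shape[1]):
        d = J @ m_train[:, k] + noise_std * rng.standard_normal(n_exp)
        err += np.sum((estimate(w, d) - m_train[:, k]) ** 2)
    return err / m_train.shape[1]

# Crude greedy search over binary experiment weights (illustrative heuristic).
w = np.zeros(n_exp)
for _ in range(5):                           # pick 5 experiments greedily
    scores = []
    for i in range(n_exp):
        if w[i] == 1:
            scores.append(np.inf)
            continue
        trial = w.copy()
        trial[i] = 1.0
        scores.append(design_objective(trial))
    w[int(np.argmin(scores))] = 1.0

print("selected experiments:", np.flatnonzero(w))
```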

Cited by 73 publications (87 citation statements) | References 30 publications
“…In [15,23,43], for instance, the authors consider bilevel optimisation for finite-dimensional Markov random field models. In inverse problems, the optimal inversion and experimental acquisition setup is discussed in the context of optimal model design in works by Haber, Horesh and Tenorio [25,26], as well as Ghattas et al [3,9]. Recently, parameter learning in the context of functional variational regularisation models (1.1) also entered the image processing community with works by the authors [10,22], Kunisch, Pock and co-workers [14,33], Chung et al [16] and Hintermüller et al [30].…”
Section: Introduction (mentioning)
confidence: 99%
“…To further improve the precision of parameter estimates, model-based experimental design techniques have been developed with the aim of maximizing the information content of experimental data (Franceschini and Macchietto, 2008; Pukelsheim, 1993). Theory and applications of model-based optimal experimental design (OED) for well-posed problems have been addressed repeatedly in recent decades (Macchietto, 2000, 2002; Franceschini and Macchietto, 2008; Körkel et al., 2004; Pukelsheim, 1993); applications to ill-posed problems can also be found (Bardow, 2008; Bitterlich and Knabner, 2003; Haber et al., 2008, 2010; Lahmer, 2011; O'Sullivan, 1986). However, ill-posed problems have to be handled carefully, as they yield a biased estimator.…”
Section: Introduction (mentioning)
confidence: 99%
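
To make the "information content" idea in the quote above concrete, the sketch below evaluates a classical D-optimality criterion (the log-determinant of a prior-regularized Fisher information matrix) and selects measurements greedily. The sensitivity matrix, noise variance, prior precision, and the greedy heuristic are hypothetical choices, not taken from the cited works.

```python
# Hypothetical illustration of a model-based OED criterion: D-optimality,
# i.e. maximizing log det of a (prior-regularized) Fisher information matrix.
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((30, 6))   # sensitivities of 30 candidate measurements
sigma2 = 0.01                      # assumed i.i.d. noise variance
prior_prec = 1e-3                  # small prior precision keeps F nonsingular

def d_criterion(selected):
    """log det of the regularized Fisher information of the chosen rows."""
    Js = J[selected, :]
    F = Js.T @ Js / sigma2 + prior_prec * np.eye(J.shape[1])
    return np.linalg.slogdet(F)[1]

# Greedy selection of 8 measurements (a common heuristic, not the only option).
selected = []
for _ in range(8):
    gains = [d_criterion(selected + [i]) if i not in selected else -np.inf
             for i in range(J.shape[0])]
    selected.append(int(np.argmax(gains)))

print("chosen measurements:", sorted(selected))
```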
“…It is important to point out that this analysis in the context of OED is not yet available in the literature. Moreover, different regularization techniques for handling nonlinear ill-posed PE problems from a singular-value-analysis point of view are discussed, namely orthogonal-decomposition-based techniques (i.e., SsS and TSVD) (Burth et al., 1999; Golub and Van Loan, 1996; Hansen, 1998; López et al., 2013; Marquardt, 1970; Xu, 1998) and the Tikhonov strategy (Bardow, 2008; Haber et al., 2008, 2010; Hansen, 1998, 2007; Johansen, 1997; Tikhonov and Arsenin, 1977). These techniques are then applied to ill-posed OED problems.…”
Section: Introduction (mentioning)
confidence: 99%
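
The two regularization families named in the quote, truncated SVD (TSVD) and Tikhonov, can be compared on a toy ill-conditioned linear system as below. The synthetic operator, the truncation level k, and the regularization parameter are illustrative assumptions, not values from the cited works.

```python
# Toy comparison of TSVD and Tikhonov regularization on a deliberately
# ill-conditioned linear system built from a prescribed SVD.
import numpy as np

rng = np.random.default_rng(2)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)            # rapidly decaying singular values
A = U @ np.diag(s) @ V.T

m_true = V[:, 0] + 0.5 * V[:, 1]             # model in the well-resolved subspace
d = A @ m_true + 1e-4 * rng.standard_normal(n)

# TSVD: keep only the k largest singular values.
k = 10
m_tsvd = V[:, :k] @ ((U[:, :k].T @ d) / s[:k])

# Tikhonov: filter factors s_i / (s_i^2 + lambda^2).
lam = 1e-3
m_tik = V @ (s * (U.T @ d) / (s**2 + lam**2))

print("TSVD error:    ", np.linalg.norm(m_tsvd - m_true))
print("Tikhonov error:", np.linalg.norm(m_tik - m_true))
```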
“…Other than numerous recent exceptions (see [8,5,12,15,9,14] and references therein), optimal experimental design for ill-posed problems has been a somewhat under-researched topic. In the case of ill-posed problems, the selection of optimal weights for (8) is more difficult because this estimate is biased and its bias depends on the unknown m: bias(m̂) = −λ²C(w)⁻¹M m, where the inverse Fisher matrix is given by…”
Section: The Ill-posed Case (mentioning)
confidence: 99%
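
The quoted bias expression can be checked numerically if one assumes the estimate is a weighted Tikhonov estimate, m̂ = (JᵀWJ + λ²M)⁻¹JᵀW d, so that C(w) = JᵀWJ + λ²M. That closed form for C(w), and all the toy matrices below, are assumptions made here for illustration, since the quote is truncated before the Fisher matrix is defined.

```python
# Numerical check of bias(m_hat) = -lam^2 C(w)^{-1} M m, under the assumption
# m_hat = (J^T W J + lam^2 M)^{-1} J^T W d and C(w) = J^T W J + lam^2 M.
import numpy as np

rng = np.random.default_rng(3)
n_data, n_param = 40, 8
J = rng.standard_normal((n_data, n_param))   # (linearized) forward operator
W = np.diag(rng.uniform(0.1, 1.0, n_data))   # experiment weights w
M = np.eye(n_param)                          # regularization matrix
lam = 0.5
m_true = rng.standard_normal(n_param)

C = J.T @ W @ J + lam**2 * M                 # assumed C(w)
bias_formula = -lam**2 * np.linalg.solve(C, M @ m_true)

# Monte Carlo estimate of E[m_hat] - m_true with zero-mean, unit-variance noise.
Cinv_JtW = np.linalg.solve(C, J.T @ W)       # data-to-estimate map
n_mc, acc = 20000, np.zeros(n_param)
for _ in range(n_mc):
    d = J @ m_true + rng.standard_normal(n_data)
    acc += Cinv_JtW @ d
bias_mc = acc / n_mc - m_true

print("closed-form bias :", np.round(bias_formula, 4))
print("Monte Carlo bias :", np.round(bias_mc, 4))
```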