2017
DOI: 10.1088/1361-6420/aa6cbd

A data-scalable randomized misfit approach for solving large-scale PDE-constrained inverse problems

Abstract: A randomized misfit approach is presented for the efficient solution of large-scale PDE-constrained inverse problems with high-dimensional data. The purpose of this paper is to offer a theory-based framework for random projections in this inverse problem setting. The stochastic approximation to the misfit is analyzed using random projection theory. By expanding beyond mean estimator convergence, a practical characterization of randomized misfit convergence can be achieved. The theoretical results dev…


Cited by 10 publications (15 citation statements)
References: 79 publications
“…This section considers a particular Monte Carlo approximation Φ_N of a quadratic potential Φ, proposed by Nemirovski et al (2008); Shapiro et al (2009), and further applied and analysed in the context of BIPs by Le et al (2017). This approximation is particularly useful when the data y ∈ R^J has very high dimension, so that one does not wish to interrogate every component of the data vector y, or evaluate every component of the model prediction G(u) and compare it with the corresponding component of y.…”
Section: Application: Randomised Misfit Models
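The Monte Carlo approximation Φ_N quoted above can be sketched compactly: draw N i.i.d. random vectors z_n with independent ±1 (Rademacher) entries and average the squared projections of the whitened residual, so that E[Φ_N] = Φ. The sketch below is a minimal illustration, not the authors' implementation; the function names (`misfit`, `randomized_misfit`), the Rademacher choice (other sub-Gaussian distributions also work), and the unit-noise default are assumptions.

```python
import numpy as np

def misfit(G_u, y, noise_std=1.0):
    """Full data misfit Phi(u) = (1/2) ||(y - G(u)) / sigma||^2 over all J data."""
    r = (y - G_u) / noise_std
    return 0.5 * (r @ r)

def randomized_misfit(G_u, y, N, noise_std=1.0, rng=None):
    """Monte Carlo estimate Phi_N(u) = (1/2N) * sum_n (z_n . r)^2 with i.i.d.
    Rademacher vectors z_n, so that E[Phi_N] = Phi. Only N projections of the
    residual are formed instead of touching all J components individually."""
    rng = np.random.default_rng(rng)
    r = (y - G_u) / noise_std
    Z = rng.choice([-1.0, 1.0], size=(N, r.size))  # N x J random projection
    return 0.5 * np.mean((Z @ r) ** 2)
```

For a residual of dimension J = 1000, a few thousand projections already reproduce the exact misfit to within a few percent, consistent with the mean-estimator convergence discussed in the excerpt.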
“…This task involves the solution of a large-scale optimisation problem with Φ in the objective function, which is typically solved using inexact Newton methods. It is shown by Le et al (2017) that the required number of evaluations of the forward model G and its adjoint is drastically reduced when using the randomised misfit Φ_N instead of the true misfit Φ, approximately by a factor of J/N. The aim of this section is to show that using the randomised misfit Φ_N leads not only to the MAP estimate but to the whole Bayesian posterior distribution being well-approximated.…”
Section: Application: Randomised Misfit Models
“…In particular, one can think of random misfit models, in which the likelihood is computed only on a random subset of the data (see e.g. [27,30]), or of multiscale models, where fast-scale effects can be modelled as random and one infers an effective slow-scale map from multiscale data [18]. Another possible application is given by probabilistic numerical methods (see e.g.…”
Section: Introduction
“…[4,5] and the references therein. In other cases, randomisation is used to reduce the computational cost of an existing method, for example in Markov chain Monte Carlo [6,7,8], sampling the posterior [9], or dimension reduction for solving inverse problems [10]. The results that we present below are motivated by the use of randomisation in problems where computation with the exact likelihood or forward model is not computationally efficient or feasible, for example the use of Gaussian process approximations of the negative log-likelihood and forward model [11].…”
Section: Introduction