Abstract. The Ensemble Kalman filter (EnKF) was introduced by Evensen in 1994 [10] as a novel method for data assimilation: state estimation for noisily observed time-dependent problems. Since that time it has had enormous impact in many application domains because of its robustness, its ease of implementation, and numerical evidence of its accuracy. In this paper we propose the application of an iterative ensemble Kalman method to the solution of a wide class of inverse problems. In this context we show that the estimate of the unknown function obtained with the ensemble Kalman method lies in a subspace A spanned by the initial ensemble. Hence the resulting error may be bounded below by the error of the best approximation in this subspace. We provide numerical experiments which compare the error incurred by the ensemble Kalman method for inverse problems with the error of the best approximation in A, and with variants of traditional least-squares approaches restricted to the subspace A. In so doing we demonstrate that the ensemble Kalman method for inverse problems provides a derivative-free optimization method with accuracy comparable to that achieved by traditional least-squares approaches. Furthermore, we also demonstrate that the accuracy is of the same order of magnitude as that achieved by the best approximation. Three examples are used to demonstrate these assertions: inversion of a compact linear operator; inversion of piezometric head to determine hydraulic conductivity in a Darcy model of groundwater flow; and inversion of Eulerian velocity measurements at positive times to determine the initial condition in an incompressible fluid.
Abstract. We introduce a derivative-free computational framework for approximating solutions to nonlinear PDE-constrained inverse problems. The general aim is to merge ideas from iterative regularization with ensemble Kalman methods from Bayesian inference to develop a stable, derivative-free method that is easy to implement in applications where the PDE (forward) model is accessible only as a black box (e.g. with commercial software). The proposed regularizing ensemble Kalman method can be derived as an approximation of the regularizing Levenberg-Marquardt (LM) scheme [14] in which the derivative of the forward operator and its adjoint are replaced with empirical covariances computed from an ensemble of elements of the admissible space of solutions. The resulting ensemble method consists of an update formula that is applied to each ensemble member and that has a regularization parameter selected in a similar fashion to the one in the LM scheme. Moreover, early termination of the scheme is proposed according to a discrepancy-principle-type criterion. The proposed method can also be viewed as a regularizing version of standard Kalman approaches, which are often unstable unless ad hoc fixes, such as covariance localization, are implemented. The aim of this paper is to provide a detailed numerical investigation of the regularizing and convergence properties of the proposed regularizing ensemble Kalman scheme; the proof of these properties is an open problem. By means of numerical experiments, we investigate the conditions under which the proposed method inherits the regularizing properties of the LM scheme of [14] and is thus stable and suitable for application to problems where the computation of the Fréchet derivative is not computationally feasible. More concretely, we study the effect of the ensemble size, the number of measurements, the selection of the initial ensemble and the tunable parameters on the performance of the method.
The numerical investigation is carried out with synthetic experiments on two model inverse problems: (i) identification of conductivity in a Darcy flow model and (ii) electrical impedance tomography with the complete electrode model. We further demonstrate the potential application of the method to shape identification problems that arise from the aforementioned forward models by means of a level-set approach for the parameterization of unknown geometries.
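The iteration described in the abstract above can be sketched as follows. The forward map, the noise level, and the values of the tunable parameters `tau` and `rho` are all assumptions for illustration; the real use case replaces `forward` with a black-box PDE solver. The regularization parameter `alpha` is increased until a discrepancy-type condition in the spirit of the LM scheme holds, and the whole iteration terminates by a discrepancy principle.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a black-box forward model (illustrative; acts column-wise).
def forward(u):
    return np.tanh(A @ u)

n, m, J = 15, 8, 6                        # state dim, data dim, ensemble size
A = rng.standard_normal((m, n)) / np.sqrt(n)
u_true = rng.standard_normal(n)
eta = 0.01                                # assumed noise level
y = forward(u_true) + eta * rng.standard_normal(m)

U = rng.standard_normal((n, J))           # initial ensemble
tau, rho = 2.5, 0.7                       # tunable parameters (assumed values)
res_hist = []

for it in range(50):
    W = forward(U)
    ubar, wbar = U.mean(axis=1, keepdims=True), W.mean(axis=1, keepdims=True)
    r = y - wbar[:, 0]                    # mean data misfit
    res_hist.append(np.linalg.norm(r))
    if res_hist[-1] <= tau * eta * np.sqrt(m):   # discrepancy-principle stop
        break
    Cuw = (U - ubar) @ (W - wbar).T / (J - 1)    # empirical cross-covariance
    Cww = (W - wbar) @ (W - wbar).T / (J - 1)    # empirical data covariance
    # LM-style selection: increase alpha until a discrepancy-type condition
    # (in the spirit of the rho-condition of the LM scheme) is satisfied.
    alpha = 1.0
    while True:
        M = np.linalg.inv(Cww + alpha * np.eye(m))
        if alpha * np.linalg.norm(M @ r) >= rho * np.linalg.norm(r):
            break
        alpha *= 2.0
    U = U + Cuw @ M @ (y[:, None] - W)    # regularized update of each member
```

The inner loop always terminates since `alpha * M` tends to the identity as `alpha` grows, so the left-hand side of the condition approaches the full residual norm.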
We introduce a level set based approach to Bayesian geometric inverse problems. In these problems the interface between different domains is the key unknown, and is realized as the level set of a function. This function itself becomes the object of the inference. Whilst the level set methodology has been widely used for the solution of geometric inverse problems, the Bayesian formulation that we develop here contains two significant advances: firstly it leads to a well-posed inverse problem in which the posterior distribution is Lipschitz with respect to the observed data; and secondly it leads to computationally expedient algorithms in which the level set itself is updated implicitly via the MCMC methodology applied to the level set function; no explicit velocity field is required for the level set interface. Applications are numerous and include medical imaging, modelling of subsurface formations and the inverse source problem; our theory is illustrated with computational results involving the last two applications.
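A minimal sketch of the level-set parameterization described above (grid size, interface shape, and field values are all illustrative): the unknown interface is the zero level set of a function `phi`, and the physical field takes one value inside it and another outside.

```python
import numpy as np

# Minimal sketch of the level-set parameterization (values illustrative).
x = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(x, x)

phi = 0.15 - np.sqrt((X - 0.5)**2 + (Y - 0.5)**2)   # disc of radius 0.15
kappa_in, kappa_out = 10.0, 1.0                     # assumed field values

field = np.where(phi > 0.0, kappa_in, kappa_out)    # piecewise-constant field

# In the Bayesian formulation it is phi, not the interface, that is inferred:
# MCMC proposes updates to phi directly, and the interface {phi = 0} moves
# implicitly, with no explicit velocity field.
```

Any MCMC update of `phi` (e.g. a preconditioned Crank-Nicolson step) induces a move of the interface for free, which is the computational expediency the abstract refers to.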
The level set approach has proven widely successful in the study of inverse problems for interfaces, since its systematic development in the 1990s. Recently it has been employed in the context of Bayesian inversion, allowing for the quantification of uncertainty within the reconstruction of interfaces. However the Bayesian approach is very sensitive to the length and amplitude scales in the prior probabilistic model. This paper demonstrates how the scale-sensitivity can be circumvented by means of a hierarchical approach, using a single scalar parameter. Together with careful development of algorithms that encode the equivalence of probability measures as the hierarchical parameter is varied, this leads to well-defined Gibbs-based MCMC methods obtained by alternating Metropolis-Hastings updates of the level set function and the hierarchical parameter. These methods demonstrably outperform non-hierarchical Bayesian level set methods. Keywords: inverse problems for interfaces; level set inversion; hierarchical Bayesian methods. arXiv:1601.03605v2 [math.PR]
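The alternating (Metropolis-within-Gibbs) structure can be sketched in a toy setting. Everything here is an assumption made for illustration: `u` stands in for the level set function, the hierarchy is on the prior amplitude, `u ~ N(0, I / tau)`, rather than on the length scale as in the paper, the hyperprior on `tau` is taken log-uniform so that the log-normal proposal's Jacobian cancels the hyperprior ratio, and the chain is started from a least-squares fit so the toy chain is well-behaved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear observation model (illustrative names throughout).
n, m = 30, 10
H = rng.standard_normal((m, n)) / np.sqrt(n)
noise_sd = 0.1
y = H @ rng.standard_normal(n) + noise_sd * rng.standard_normal(m)

def log_like(u):
    r = y - H @ u
    return -0.5 * (r @ r) / noise_sd**2

u = np.linalg.lstsq(H, y, rcond=None)[0]      # informative start (assumed)
tau = 1.0                                     # scalar hierarchical parameter
beta = 0.2                                    # pCN step size (assumed)

for k in range(2000):
    # (1) pCN proposal for u, reversible w.r.t. the prior N(0, I / tau),
    # so only the likelihood ratio enters the acceptance probability.
    v = np.sqrt(1 - beta**2) * u + beta * rng.standard_normal(n) / np.sqrt(tau)
    if np.log(rng.uniform()) < log_like(v) - log_like(u):
        u = v
    # (2) log-normal random-walk proposal for tau; with the assumed
    # log-uniform hyperprior, only the Gaussian prior density ratio at the
    # current u remains in the acceptance probability.
    t_new = tau * np.exp(0.1 * rng.standard_normal())
    log_ratio = 0.5 * n * np.log(t_new / tau) - 0.5 * (t_new - tau) * (u @ u)
    if np.log(rng.uniform()) < log_ratio:
        tau = t_new
```

The point of the alternation is that step (1) stays well-defined as `tau` changes, which in function space corresponds to the measure-equivalence considerations the abstract mentions.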
We propose the application of iterative regularization to the development of ensemble methods for solving Bayesian inverse problems. Concretely, we construct (i) a variational iterative regularizing ensemble Levenberg-Marquardt method (IR-enLM) and (ii) a derivative-free iterative ensemble Kalman smoother (IR-ES). The aim of these methods is to provide a robust ensemble approximation of the Bayesian posterior. The proposed methods are based on fundamental ideas from iterative regularization methods that have been widely used for the solution of deterministic inverse problems [22]. In this work we are interested in the application of the proposed ensemble methods to Bayesian inverse problems that arise in reservoir modeling applications. The proposed ensemble methods use key aspects of the regularizing Levenberg-Marquardt scheme developed by Hanke [17], which we recently applied to history matching in [19]. Unlike standard methods, where the stopping criteria and regularization parameters are typically selected heuristically, in the proposed ensemble methods the discrepancy principle is applied for (i) the selection of the regularization parameters and (ii) the early termination of the scheme. The discrepancy principle is key to the theory of iterative regularization, and the purpose of the present work is to apply this principle to the development of ensemble methods defined as iterative updates of solutions to linear ill-posed inverse problems. The regularizing and convergence properties of iterative regularization methods for deterministic inverse problems have long been established. However, the approximation properties of the proposed ensemble methods in the context of Bayesian inverse problems remain an open problem.
In the case where the forward operator is linear and the prior is Gaussian, we show that the tunable parameters of the proposed IR-enLM and IR-ES can be chosen so that the resulting schemes coincide with the standard randomized maximum likelihood (RML) method and the ensemble smoother (ES), respectively. Therefore, the proposed methods sample from the posterior in the linear-Gaussian case. As with the RML and ES methods, in the nonlinear case one may not conclude that the proposed methods produce samples from the posterior. The present work provides a numerical investigation of the performance of the proposed ensemble methods at capturing the posterior. In particular, we aim at understanding the role of the tunable parameters that arise from the application of iterative regularization techniques. The numerical framework for our investigations consists of using a state-of-the-art MCMC method to resolve the Bayesian posterior from synthetic experiments. The posterior resolved via MCMC then provides a gold standard against which to compare the proposed IR-enLM and IR-ES. Our numerical experiments give a clear indication that the regularizing properties of the regularization methods applied for the computation of each ensemble have a significant impact on the approximation properties…
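The linear-Gaussian coincidence mentioned above can be checked directly in a toy example (all dimensions and names are illustrative). Each RML sample minimizes a randomized least-squares functional with a prior draw and perturbed data, which here has a closed form; the sample mean and covariance then match the analytic posterior up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear-Gaussian setup (illustrative): prior N(0, C), noise N(0, Gamma).
n, m = 4, 6
G = rng.standard_normal((m, n))
C = np.eye(n)                     # prior covariance (identity for simplicity)
sigma = 0.05
y = G @ rng.standard_normal(n) + sigma * rng.standard_normal(m)

# Analytic posterior N(mu, Sigma).
Sigma = np.linalg.inv(G.T @ G / sigma**2 + np.linalg.inv(C))
mu = Sigma @ G.T @ y / sigma**2

# RML: sample j minimizes |y_j - G u|^2 / sigma^2 + |u - u_j|^2, where u_j is
# a prior draw and y_j is data perturbed with noise-distributed draws. The
# minimizer is available in closed form for this linear problem.
J = 20000
U0 = rng.standard_normal((J, n))                       # prior draws (C = I)
Yp = y[None, :] + sigma * rng.standard_normal((J, m))  # perturbed data
samples = (Sigma @ (G.T @ Yp.T / sigma**2 + U0.T)).T

err_mean = np.abs(samples.mean(axis=0) - mu).max()
err_cov = np.abs(np.cov(samples.T) - Sigma).max()
# err_mean and err_cov shrink as J grows: RML is exact here.
```

A short calculation confirms why: the sample mean is `Sigma @ G.T @ y / sigma**2 = mu`, and the sample covariance is `Sigma (G.T G / sigma**2 + C^{-1}) Sigma = Sigma`, which is exactly the posterior; this is the regime in which the proposed ensemble methods coincide with RML and ES.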