Ensemble Kalman inversion is a parallelizable methodology for solving inverse or parameter estimation problems. Although it is based on ideas from Kalman filtering, it may be viewed as a derivative-free optimization method. In its most basic form it regularizes ill-posed inverse problems through the subspace property: the solution found lies in the linear span of the initial ensemble employed. In this work we demonstrate how further regularization can be imposed, incorporating prior information about the underlying unknown. In particular, we study how to impose Tikhonov-like Sobolev penalties. As well as introducing this modified ensemble Kalman inversion methodology, we also study its continuous-time limit, proving ensemble collapse; in the language of multi-agent optimization this may be viewed as reaching consensus. We also conduct a suite of numerical experiments to highlight the benefits of Tikhonov regularization in the ensemble inversion context.
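One common way to realize such a Tikhonov penalty in practice is to rewrite it as additional artificial observations, so that a standard ensemble Kalman update applies unchanged to an augmented system. The following is a minimal sketch of one Tikhonov-regularized EKI step under that assumption, for a finite-dimensional unknown with a generic forward map `G`, noise covariance `Gamma`, and penalty (prior) covariance `C0`; all names and the perturbed-observation form are illustrative, not the authors' notation.

```python
import numpy as np

def eki_tikhonov_step(U, G, y, Gamma, C0, rng):
    """One EKI update with Tikhonov regularization imposed by data
    augmentation: the penalty is treated as an extra 'observation'
    u = 0 with noise covariance C0."""
    J, d = U.shape
    k = y.shape[0]
    # augmented forward map: stack G(u) and u itself
    GU = np.array([np.concatenate([G(u), u]) for u in U])
    y_aug = np.concatenate([y, np.zeros(d)])
    # block-diagonal noise covariance for the augmented system
    Sigma = np.block([[Gamma, np.zeros((k, d))],
                      [np.zeros((d, k)), C0]])
    Um = U - U.mean(axis=0)
    Gm = GU - GU.mean(axis=0)
    Cug = Um.T @ Gm / J                    # parameter-output cross-covariance
    Cgg = Gm.T @ Gm / J                    # output covariance
    K = Cug @ np.linalg.inv(Cgg + Sigma)   # Kalman-type gain
    # perturbed-observation update of every ensemble member
    Y = y_aug + rng.multivariate_normal(np.zeros(k + d), Sigma, size=J)
    return U + (Y - GU) @ K.T
```

Iterating this step on a linear toy problem drives the ensemble mean toward the Tikhonov-regularized least-squares solution, while the solution stays in the span of the initial ensemble.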
The use of ensemble methods to solve inverse problems is attractive because the methodology is derivative-free and well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. The choice of parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
PACS numbers: 62M20, 49N45, G5L09.
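The two kinds of parameterization can be illustrated in a few lines. Below is a hedged sketch, assuming a simple one-dimensional grid: a level-set map thresholds a continuous function to produce a piecewise constant field, and a Gaussian random function is drawn from a squared-exponential covariance whose length-scale would, in a hierarchical approach, itself be treated as an unknown. All names, field values, and kernel choices are illustrative, not taken from the paper.

```python
import numpy as np

def level_set_field(phi, low=1.0, high=10.0, c=0.0):
    """Geometric parameterization: threshold the continuous level-set
    function phi at level c to obtain a piecewise constant field."""
    return np.where(phi < c, low, high)

def gaussian_sample(x, length_scale, rng):
    """Hierarchical parameterization: draw a smooth random function
    on the 1-d grid x from a squared-exponential covariance; the
    length_scale is the hierarchical parameter to be learned."""
    K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / length_scale**2)
    K += 1e-6 * np.eye(len(x))   # jitter for numerical stability
    return np.linalg.cholesky(K) @ rng.standard_normal(len(x))
```

Combining the two, as in the level set method, means applying `level_set_field` to a sample like `gaussian_sample`, so that the interface location and topology are implied by the continuous level-set function rather than parameterized directly.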
Many data-science problems can be formulated as inverse problems, in which parameters are estimated by minimizing an appropriate loss function. When complicated black-box models are involved, derivative-free optimization tools are often needed. The ensemble Kalman filter (EnKF) is a particle-based, derivative-free Bayesian algorithm originally designed for data assimilation. Recently, it has been applied to inverse problems for computational efficiency. The resulting algorithm, known as ensemble Kalman inversion (EKI), runs an ensemble of particles with EnKF update rules so that they converge to a minimizer. In this article, we investigate EKI convergence in general nonlinear settings. To improve convergence speed and stability, we consider applying EKI with non-constant step-sizes and covariance inflation. We prove that EKI can reach critical points within finitely many steps in non-convex settings. We further prove that EKI converges to the global minimizer polynomially fast if the loss function is strongly convex. We verify the presented analysis with numerical experiments on two inverse problems.
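The two devices mentioned, non-constant step-sizes and covariance inflation, slot naturally into a single EKI iteration. The sketch below is a minimal deterministic variant for a finite-dimensional unknown; the step-size `h` and the multiplicative inflation factor `rho` are illustrative scalar knobs standing in for the schedules analysed in the paper, and the notation is not the authors'.

```python
import numpy as np

def eki_step(U, G, y, Gamma, h, rho):
    """One deterministic EKI iteration with step-size h and
    multiplicative covariance inflation rho >= 1."""
    J = U.shape[0]
    GU = np.array([G(u) for u in U])
    Um = U - U.mean(axis=0)
    Gm = GU - GU.mean(axis=0)
    Cug = Um.T @ Gm / J
    Cgg = Gm.T @ Gm / J
    # damped (step-size-scaled) Kalman gain
    K = h * Cug @ np.linalg.inv(h * Cgg + Gamma)
    U_new = U + (y - GU) @ K.T
    # inflation: rescale the spread about the mean to slow collapse
    m = U_new.mean(axis=0)
    return m + rho * (U_new - m)
```

For a linear forward map the residual of the ensemble mean contracts at every step, while `rho > 1` keeps the ensemble spread, and hence the effective search subspace, from collapsing too quickly.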
The Bayesian approach to inverse problems is widely used in practice to infer unknown parameters from noisy observations. In this framework, ensemble Kalman inversion has been successfully applied to the quantification of uncertainties in various areas of application. In recent years, a complete analysis of the method has been developed for linear inverse problems from an optimization viewpoint. However, many applications require the incorporation of additional constraints on the parameters, e.g. arising from physical constraints. We propose a new variant of ensemble Kalman inversion that includes box constraints on the unknown parameters, motivated by the theory of projected preconditioned gradient flows. Based on the continuous-time limit of the constrained ensemble Kalman inversion, we present a complete convergence analysis for linear forward problems. We adopt techniques from filtering, such as variance inflation, which are crucial to improve performance and establish correct descent behaviour. These benefits are highlighted through a number of numerical examples on various inverse problems based on partial differential equations.
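A box constraint can be enforced discretely by following each ensemble update with a projection onto the feasible set; for a box, the Euclidean projection is a componentwise clip. The sketch below uses that clip as a simple stand-in for the projected preconditioned gradient flow motivating the method; it omits variance inflation, and all names are illustrative rather than the authors' notation.

```python
import numpy as np

def projected_eki_step(U, G, y, Gamma, lo, hi):
    """One EKI step followed by projection onto the box [lo, hi]^d."""
    J = U.shape[0]
    GU = np.array([G(u) for u in U])
    Um = U - U.mean(axis=0)
    Gm = GU - GU.mean(axis=0)
    Cug = Um.T @ Gm / J
    Cgg = Gm.T @ Gm / J
    # standard EKI update of each particle
    U_new = U + (y - GU) @ np.linalg.solve(Cgg + Gamma, Cug.T)
    # componentwise Euclidean projection onto the box constraint
    return np.clip(U_new, lo, hi)
```

Every iterate is feasible by construction, so even when the unconstrained least-squares solution lies outside the box, the ensemble remains inside it.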
This paper provides a unified perspective of iterative ensemble Kalman methods, a family of derivative-free algorithms for parameter reconstruction and other related tasks. We identify, compare and develop three subfamilies of ensemble methods that differ in the objective they seek to minimize and the derivative-based optimization scheme they approximate through the ensemble. Our work emphasizes two principles for the derivation and analysis of iterative ensemble Kalman methods: statistical linearization and continuum limits. Following these guiding principles, we introduce new iterative ensemble Kalman methods that show promising numerical performance in Bayesian inverse problems, data assimilation and machine learning tasks.
In this article we consider the linear filtering problem in continuous time. We develop and apply multilevel Monte Carlo (MLMC) strategies for ensemble Kalman-Bucy filters (EnKBFs). These filters can be viewed as approximations of conditional McKean-Vlasov-type diffusion processes. They are also interpreted as the continuous-time analogue of the ensemble Kalman filter, which has proven successful due to its applicability and computational cost. We prove that our multilevel EnKBF can achieve a mean square error (MSE) of O(ε²), ε > 0, with a cost of order O(ε⁻² log(ε)²). This implies a reduction in cost compared to the (single-level) EnKBF, which requires a cost of O(ε⁻³) to achieve an MSE of O(ε²). In order to prove this result we provide Monte Carlo convergence and approximation bounds associated with time-discretized EnKBFs. To the best of our knowledge, these are the first Monte Carlo-type results associated with the discretized EnKBF. We test our theory on a linear problem, which we motivate through a relatively high-dimensional example of order ∼10³.
In this article we consider the linear filtering problem in continuous time. We develop and apply multilevel Monte Carlo (MLMC) strategies for ensemble Kalman-Bucy filters (EnKBFs). These filters can be viewed as approximations of conditional McKean-Vlasov-type diffusion processes. They are also interpreted as the continuous-time analogue of the ensemble Kalman filter, which has proven successful due to its applicability and computational cost. We prove that an ideal version of our multilevel EnKBF can achieve a mean square error (MSE) of O(ε²), ε > 0, with a cost of order O(ε⁻² log(ε)²). This implies a reduction in cost compared to the (single-level) EnKBF, which requires a cost of O(ε⁻³) to achieve an MSE of O(ε²). In order to prove this result we provide Monte Carlo convergence and approximation bounds associated with time-discretized EnKBFs. We test our theory on a linear Ornstein-Uhlenbeck process, which we motivate through high-dimensional examples of order ∼O(10⁴) and O(10⁵), where we also numerically test an alternative deterministic counterpart of the EnKBF.
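The cost reduction in both abstracts rests on the generic MLMC telescoping identity E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}], estimated with coupled samples at each level so that the differences have small variance. The sketch below shows that estimator in isolation, independent of the filtering application; `sample_level` is a hypothetical user-supplied function returning coupled fine/coarse samples, not part of the papers' algorithms.

```python
import numpy as np

def mlmc_estimate(sample_level, L, N, rng):
    """Multilevel Monte Carlo estimator of E[P_L] via the telescoping
    sum E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].
    sample_level(l, n, rng) must return n coupled samples
    (fine = P_l, coarse = P_{l-1}; coarse is taken as 0 at level 0)."""
    est = 0.0
    for l in range(L + 1):
        fine, coarse = sample_level(l, N[l], rng)
        est += np.mean(fine - coarse)
    return est
```

Because the level-l corrections shrink as the discretizations are refined, most samples can be spent on the cheap coarse levels, which is the source of the O(ε⁻³) to O(ε⁻² log(ε)²) cost improvement claimed above.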