The ensemble Kalman inversion is widely used in practice to estimate unknown parameters from noisy measurement data. Its low computational cost, straightforward implementation, and non-intrusive nature make the method appealing in various areas of application. We present a complete analysis of the ensemble Kalman inversion with perturbed observations for a fixed ensemble size when applied to linear inverse problems. The well-posedness and convergence results are based on continuous-time scaling limits of the method. The resulting coupled system of stochastic differential equations allows us to derive estimates on the long-time behaviour and provides insights into the convergence properties of the ensemble Kalman inversion. We view the method as a derivative-free optimization method for the least-squares misfit functional, which opens up the perspective of using the method in various areas of application such as imaging, groundwater flow problems, biological problems, as well as the training of neural networks. AMS classification scheme numbers: 65N21, 62F15, 65N75, 65C30, 90C56.
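The abstract above views EKI as a derivative-free optimizer for the least-squares misfit. As a rough illustration only (not the paper's implementation), the discrete EKI update with perturbed observations for a linear forward map can be sketched in NumPy; the function name `eki_step` and the toy problem sizes are invented for this example:

```python
import numpy as np

def eki_step(U, A, y, Gamma, rng):
    """One EKI iteration with perturbed observations for a linear map A.

    U     : (J, d) ensemble of parameter vectors
    A     : (m, d) forward operator
    y     : (m,)   noisy data
    Gamma : (m, m) observation-noise covariance
    """
    J = U.shape[0]
    G = U @ A.T                          # forward evaluations, shape (J, m)
    du = U - U.mean(axis=0)              # parameter deviations from the mean
    dg = G - G.mean(axis=0)              # observation deviations from the mean
    C_ug = du.T @ dg / J                 # empirical cross-covariance, (d, m)
    C_gg = dg.T @ dg / J                 # empirical output covariance, (m, m)
    # each particle sees an independently perturbed copy of the data
    xi = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    K = C_ug @ np.linalg.inv(C_gg + Gamma)   # Kalman-type gain, (d, m)
    return U + (y + xi - G) @ K.T

# toy linear inverse problem: recover u_true from y = A u_true + noise
rng = np.random.default_rng(0)
d, m, J = 5, 8, 50
A = rng.standard_normal((m, d))
u_true = rng.standard_normal(d)
Gamma = 0.01 * np.eye(m)
y = A @ u_true + rng.multivariate_normal(np.zeros(m), Gamma)
U0 = rng.standard_normal((J, d))         # initial ensemble (prior draws)
misfit0 = np.linalg.norm(A @ U0.mean(axis=0) - y)
U = U0.copy()
for _ in range(50):
    U = eki_step(U, A, y, Gamma, rng)
misfit = np.linalg.norm(A @ U.mean(axis=0) - y)
```

A well-known property visible in this sketch: the updates keep each particle in the affine span of the initial ensemble, so with `J` larger than `d` the toy problem can be solved in the full parameter space.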
The Bayesian approach to inverse problems is widely used in practice to infer unknown parameters from noisy observations. In this framework, the ensemble Kalman inversion has been successfully applied for the quantification of uncertainties in various areas of application. In recent years, a complete analysis of the method has been developed for linear inverse problems adopting an optimization viewpoint. However, many applications require the incorporation of additional constraints on the parameters, e.g. arising due to physical constraints. We propose a new variant of the ensemble Kalman inversion that includes box constraints on the unknown parameters, motivated by the theory of projected preconditioned gradient flows. Based on the continuous-time limit of the constrained ensemble Kalman inversion, we discuss a complete convergence analysis for linear forward problems. We adopt techniques from filtering, such as variance inflation, which are crucial in order to improve the performance and establish a correct descent. These benefits are highlighted through a number of numerical examples on various inverse problems based on partial differential equations.
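The box-constraint idea described above can be illustrated, under simplifying assumptions, by following each Kalman-type update with a componentwise projection onto the box, in the spirit of projected gradient methods. This is only a minimal sketch; the paper's projected preconditioned variant (and its variance inflation) is more refined. The name `projected_eki_step` and the toy problem are invented here:

```python
import numpy as np

def projected_eki_step(U, A, y, Gamma, lo, hi, rng):
    """One EKI step with perturbed observations, followed by a
    componentwise projection of every particle onto the box [lo, hi].
    """
    J = U.shape[0]
    G = U @ A.T                          # forward evaluations
    du = U - U.mean(axis=0)
    dg = G - G.mean(axis=0)
    C_ug = du.T @ dg / J                 # empirical cross-covariance
    C_gg = dg.T @ dg / J                 # empirical output covariance
    xi = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    U_new = U + (y + xi - G) @ np.linalg.inv(C_gg + Gamma) @ C_ug.T
    return np.clip(U_new, lo, hi)        # projection onto the box constraints

# toy problem with a truth known to satisfy the constraints
rng = np.random.default_rng(1)
d, m, J = 4, 6, 30
A = rng.standard_normal((m, d))
u_true = np.clip(rng.standard_normal(d), -1.0, 1.0)
Gamma = 0.01 * np.eye(m)
y = A @ u_true + rng.multivariate_normal(np.zeros(m), Gamma)
U = rng.uniform(-1.0, 1.0, size=(J, d))  # feasible initial ensemble
for _ in range(30):
    U = projected_eki_step(U, A, y, Gamma, -1.0, 1.0, rng)
```

By construction every iterate remains feasible, which is the basic guarantee a projection step provides regardless of how far an unconstrained update would overshoot.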
Ensemble Kalman inversion (EKI) is a method for the estimation of unknown parameters in the context of (Bayesian) inverse problems. The method approximates the underlying measure by an ensemble of particles and iteratively applies the ensemble Kalman update to evolve (the approximation of) the prior into the posterior measure. For the convergence analysis of the EKI it is common practice to derive a continuous version, replacing the iteration with a stochastic differential equation. In this paper we validate this approach by showing that the stochastic EKI iteration converges to paths of the continuous-time stochastic differential equation, considering both the nonlinear and the linear setting: we prove convergence in probability for the former and convergence in moments for the latter. The methods employed can also be applied to the analysis of more general numerical schemes for stochastic differential equations.
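The continuous-time object referred to above is, in the linear case, a coupled system of SDEs driven by the empirical covariances. As a hedged illustration of what "discretizing the limiting SDE" looks like, the following Euler-Maruyama sketch integrates a commonly stated linear EKI SDE, du_j = C_ug Γ⁻¹(y − A u_j) dt + C_ug Γ^(−1/2) dW_j; the function name, step size, and toy data are all assumptions made for this example, not taken from the paper:

```python
import numpy as np

def eki_sde_em(U, A, y, Gamma, h, n_steps, rng):
    """Euler-Maruyama discretization of the linear continuous-time EKI SDE
        du_j = C_ug Gamma^{-1} (y - A u_j) dt + C_ug Gamma^{-1/2} dW_j,
    where C_ug is the empirical parameter/observation cross-covariance.
    Small step sizes h emulate the continuous-time dynamics that the
    discrete iteration is shown to approximate.
    """
    Ginv = np.linalg.inv(Gamma)
    Gisqrt = np.linalg.cholesky(Ginv)    # a matrix square root of Gamma^{-1}
    J, m = U.shape[0], len(y)
    for _ in range(n_steps):
        G = U @ A.T
        du = U - U.mean(axis=0)
        dg = G - G.mean(axis=0)
        C_ug = du.T @ dg / J
        drift = (y - G) @ Ginv @ C_ug.T              # rows: C_ug G^{-1}(y - A u_j)
        dW = rng.standard_normal((J, m)) * np.sqrt(h)  # Brownian increments
        U = U + h * drift + dW @ Gisqrt.T @ C_ug.T
    return U

# toy linear problem integrated up to time T = h * n_steps = 5
rng = np.random.default_rng(3)
d, m, J = 3, 4, 40
A = rng.standard_normal((m, d))
u_true = rng.standard_normal(d)
Gamma = 0.5 * np.eye(m)
y = A @ u_true + rng.multivariate_normal(np.zeros(m), Gamma)
U0 = rng.standard_normal((J, d))
U = eki_sde_em(U0, A, y, Gamma, h=0.01, n_steps=500, rng=rng)
misfit0 = np.linalg.norm(A @ U0.mean(axis=0) - y)
misfit = np.linalg.norm(A @ U.mean(axis=0) - y)
```

Note the self-stabilizing structure: as the ensemble collapses, both the drift and the diffusion shrink with `C_ug`, which is one reason the long-time analysis of the limiting system is tractable.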
Ensemble Kalman inversion (EKI) is a derivative-free optimizer aimed at solving inverse problems, taking motivation from the celebrated ensemble Kalman filter. The purpose of this article is to consider the introduction of adaptive Tikhonov strategies for EKI. This work builds upon Tikhonov EKI (TEKI), which was proposed for a fixed regularization constant. By adaptively learning the regularization parameter, this procedure is known to improve the recovery of the underlying unknown. For the analysis, we consider a continuous-time setting where we extend known results such as well-posedness and convergence of various loss functions, but with the addition of noisy observations for the limiting stochastic differential equations (i.e. stochastic TEKI). Furthermore, we allow a time-varying noise and regularization covariance in our presented convergence result, which mimics adaptive regularization schemes. In turn we present three adaptive regularization schemes, motivated by both the deterministic and Bayesian approaches to inverse problems: bilevel optimization, the MAP formulation, and covariance learning. We numerically test these schemes and the theory on linear and nonlinear partial differential equations, where they outperform the non-adaptive TEKI and EKI.
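A standard way to realize the Tikhonov regularization mentioned above (for a fixed regularization constant, before any adaptivity) is to augment the observation system so that plain EKI applied to the augmented problem minimizes the regularized misfit |y − Au|²_Γ + λ|u|²_{C₀}. The sketch below checks this algebraically rather than running EKI; the helper name `teki_setup` and the test data are assumptions for illustration:

```python
import numpy as np

def teki_setup(A, y, Gamma, lam, C0):
    """Augment a linear inverse problem so that standard least squares
    (and hence standard EKI) on the augmented system targets the
    Tikhonov functional  |y - A u|_Gamma^2 + lam * |u|_{C0}^2.
    """
    m, d = A.shape
    A_aug = np.vstack([A, np.eye(d)])            # extended forward map (A; I)
    y_aug = np.concatenate([y, np.zeros(d)])     # extended data (y; 0)
    Gamma_aug = np.block([
        [Gamma, np.zeros((m, d))],
        [np.zeros((d, m)), C0 / lam],            # regularizer enters as noise covariance
    ])
    return A_aug, y_aug, Gamma_aug

# sanity check: the augmented weighted least-squares solution equals
# the solution of the Tikhonov normal equations
rng = np.random.default_rng(2)
d, m = 3, 5
A = rng.standard_normal((m, d))
y = rng.standard_normal(m)
Gamma = 0.1 * np.eye(m)
lam, C0 = 0.5, np.eye(d)
A_aug, y_aug, Gamma_aug = teki_setup(A, y, Gamma, lam, C0)
W = np.linalg.inv(Gamma_aug)
u_aug = np.linalg.solve(A_aug.T @ W @ A_aug, A_aug.T @ W @ y_aug)
u_tik = np.linalg.solve(A.T @ np.linalg.inv(Gamma) @ A + lam * np.linalg.inv(C0),
                        A.T @ np.linalg.inv(Gamma) @ y)
```

The adaptive schemes discussed in the abstract amount to updating `lam` (and possibly `C0`) over time rather than fixing them as in this sketch.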