Data assimilation combines information from models, measurements, and priors to estimate the state of a dynamical system such as the atmosphere. The ensemble Kalman filter (EnKF) is a family of ensemble-based data assimilation approaches that has gained wide popularity due to its simple formulation, ease of implementation, and good practical results. Most EnKF algorithms assume that the underlying probability distributions are Gaussian. Although this assumption is well accepted, it is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. Several approaches have been proposed in order to avoid the Gaussianity assumption. One of the most successful strategies is the maximum likelihood ensemble filter (MLEF), which computes a maximum a posteriori estimate of the state assuming the posterior distribution is Gaussian. MLEF is designed to work with nonlinear and even non-differentiable observation operators, and shows good practical performance. However, there are limits to the degree of nonlinearity that MLEF can handle. This paper proposes a new ensemble-based data assimilation method, named the "sampling filter," which obtains the analysis by sampling directly from the posterior distribution. The sampling strategy is based on a Hybrid Monte Carlo (HMC) approach that can handle non-Gaussian probability distributions. Numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The proposed filter is also tested with a shallow water model on the sphere with a linear observation operator. The results show that the sampling filter can perform well even in highly nonlinear situations where the EnKF and MLEF filters diverge.
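The core building block of the sampling filter described above is Hybrid Monte Carlo sampling from a posterior density. The following is a minimal generic HMC sketch, not the paper's filter itself: it draws samples from a target p(x) ∝ exp(−U(x)) via leapfrog integration and a Metropolis accept/reject step. The step size and trajectory length are illustrative assumptions.

```python
import numpy as np

def hmc_sample(grad_U, U, x0, n_samples, eps=0.1, n_leap=20, seed=0):
    """Generic HMC sampler for a target density p(x) ∝ exp(-U(x))."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)           # auxiliary momentum draw
        x_new, p_new = x.copy(), p.copy()
        p_new -= 0.5 * eps * grad_U(x_new)         # initial half momentum step
        for i in range(n_leap):
            x_new += eps * p_new                   # full position step
            if i != n_leap - 1:
                p_new -= eps * grad_U(x_new)       # full momentum step
        p_new -= 0.5 * eps * grad_U(x_new)         # final half momentum step
        # Metropolis test on the change in total Hamiltonian H = U + K
        dH = (U(x_new) + 0.5 * p_new @ p_new) - (U(x) + 0.5 * p @ p)
        if np.log(rng.uniform()) < -dH:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Sanity check on a standard normal target, U(x) = x^2 / 2
draws = hmc_sample(lambda x: x, lambda x: 0.5 * x @ x, [0.0], 2000)
```

In the filter setting, U(x) would be the negative log-posterior combining the background and observation terms, which is exactly where non-Gaussianity enters without any change to the sampler.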
Summary: This paper constructs an ensemble-based sampling smoother for four-dimensional data assimilation using a Hybrid/Hamiltonian Monte Carlo approach. The smoother samples efficiently from the posterior probability density of the solution at the initial time. Unlike the well-known ensemble Kalman smoother, which is optimal only in the linear Gaussian case, the proposed methodology naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators. Unlike the four-dimensional variational method, which only finds a mode of the posterior distribution, the smoother provides an estimate of the posterior uncertainty. One can use the ensemble mean as the minimum variance estimate of the state or can use the ensemble in conjunction with the variational approach to estimate the background errors for subsequent assimilation windows. Numerical results demonstrate the advantages of the proposed method compared to the traditional variational and ensemble-based smoothing methods. Copyright © 2016 John Wiley & Sons, Ltd.
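The two uses of the smoother's output mentioned above, a minimum-variance state estimate and background-error statistics for the next window, reduce to the sample mean and sample covariance of the posterior ensemble. A small sketch with a synthetic placeholder ensemble (the numbers are illustrative assumptions, not results from the paper):

```python
import numpy as np

# Synthetic stand-in for an ensemble of posterior samples at the initial
# time: 50 members of a 3-dimensional state centered near (1, 2, 3).
rng = np.random.default_rng(3)
ensemble = rng.standard_normal((50, 3)) + np.array([1.0, 2.0, 3.0])

x_mean = ensemble.mean(axis=0)       # minimum-variance state estimate
B = np.cov(ensemble, rowvar=False)   # background-error covariance estimate
```

The matrix B is what a subsequent variational assimilation window could use in place of a static climatological background covariance.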
We develop a framework for goal-oriented optimal design of experiments (GOODE) for large-scale Bayesian linear inverse problems governed by PDEs. This framework differs from classical Bayesian optimal design of experiments (ODE) in the following sense: we seek experimental designs that minimize the posterior uncertainty in the experiment end-goal, e.g., a quantity of interest (QoI), rather than the estimated parameter itself. This is suitable for scenarios in which the solution of an inverse problem is an intermediate step and the estimated parameter is then used to compute a QoI. In such problems, a GOODE approach has two benefits: the designs can avoid wastage of experimental resources by a targeted collection of data, and the resulting design criteria are computationally easier to evaluate due to the often low dimensionality of the QoIs. We present two modified design criteria, A-GOODE and D-GOODE, which are natural analogues of the classical Bayesian A- and D-optimal criteria. We analyze the connections to other ODE criteria, and provide interpretations for the GOODE criteria by using tools from information theory. Then, we develop an efficient gradient-based optimization framework for solving the GOODE optimization problems. Additionally, we present comprehensive numerical experiments testing the various aspects of the presented approach. The driving application is the optimal placement of sensors to identify the source of contaminants in a diffusion and transport problem. We enforce sparsity of the sensor placements using an ℓ₁-norm penalty approach, and propose a practical strategy for specifying the associated penalty parameter.
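To make the goal-oriented idea concrete, here is a toy analogue of an A-GOODE-style criterion for a small linear-Gaussian inverse problem. All matrices and the on/off sensor-selection mechanism are illustrative assumptions; the point is only that the criterion is the trace of the posterior covariance of the goal quantity q = Px rather than of the parameter x itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4                       # parameter dim, number of candidate sensors
F = rng.standard_normal((m, n))   # forward operator (one row per sensor)
P = rng.standard_normal((2, n))   # goal operator mapping x to a 2-dim QoI
C_prior = np.eye(n)               # prior covariance
sigma2 = 0.1                      # observation noise variance

def a_goode(active):
    """trace(P C_post P^T) for the sensors flagged in `active`."""
    Fa = F[np.asarray(active, dtype=bool)]
    H = Fa.T @ Fa / sigma2 + np.linalg.inv(C_prior)   # posterior precision
    C_post = np.linalg.inv(H)
    return np.trace(P @ C_post @ P.T)

# Activating more sensors can only (weakly) reduce goal-oriented uncertainty.
crit_two = a_goode([1, 1, 0, 0])
crit_all = a_goode([1, 1, 1, 1])
```

The low dimensionality of the QoI (2 here, versus the parameter dimension) is what makes such criteria cheaper to evaluate, as the abstract notes.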
SUMMARY: The hybrid Monte Carlo sampling smoother is a fully non-Gaussian four-dimensional data assimilation algorithm that works by directly sampling the posterior distribution formulated in the Bayesian framework. The smoother in its original formulation is computationally expensive owing to the intrinsic requirement of running the forward and adjoint models repeatedly. Here we present computationally efficient versions of the hybrid Monte Carlo sampling smoother based on reduced-order approximations of the underlying model dynamics. The schemes developed herein are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation.
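Reduced-order surrogates of the kind referenced above are commonly built by proper orthogonal decomposition (POD): a basis of leading left singular vectors of a snapshot matrix. The sketch below shows generic POD basis construction and projection on synthetic data; it is not the paper's exact scheme, and the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_snap, r = 100, 30, 5    # full state dim, snapshot count, reduced dim

# Synthetic snapshot matrix with near rank-r structure plus small noise
modes = rng.standard_normal((n_state, r))
snapshots = modes @ rng.standard_normal((r, n_snap)) \
    + 1e-3 * rng.standard_normal((n_state, n_snap))

# POD basis: leading r left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]

# Project a full state to reduced coordinates, then lift back
x = snapshots[:, 0]
x_r = Phi.T @ x                    # reduced representation (dim r)
x_rec = Phi @ x_r                  # reconstruction in the full space
rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
```

Running the forward and adjoint dynamics in the r-dimensional reduced space is what makes the repeated model evaluations inside the HMC smoother affordable.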