2018
DOI: 10.3390/atmos9060213

Cluster Sampling Filters for Non-Gaussian Data Assimilation

Abstract: This paper presents a fully non-Gaussian filter for sequential data assimilation. The filter is named the "cluster sampling filter", and works by directly sampling the posterior distribution following a Markov Chain Monte-Carlo (MCMC) approach, while the prior distribution is approximated using a Gaussian Mixture Model (GMM). Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a GMM to the prior ensemble. Using the data li…
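The clustering step described in the abstract amounts to fitting a Gaussian Mixture Model to the forecast (prior) ensemble with the EM algorithm. Below is a minimal sketch of that idea, assuming a forecast ensemble stored as a NumPy array; the placeholder ensemble and the BIC-based choice of the number of mixture components are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the clustering step: fit a GMM to the forecast ensemble
# with the EM algorithm. X_f is a placeholder ensemble (members x state size);
# the BIC-based selection of the number of components is an assumption made
# here for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_f = rng.normal(size=(100, 3))  # placeholder forecast ensemble

candidates = range(1, 6)
fits = [GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(X_f)
        for k in candidates]
best = min(fits, key=lambda g: g.bic(X_f))  # keep the fit with the lowest BIC

print("selected components:", best.n_components)
print("mixture weights:", best.weights_)
```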

Cited by 14 publications (18 citation statements). References: 40 publications.
“…For this reason, one prefers to use a different model to describe the prior error distribution. A Gaussian Mixture Model is frequently used to relax the Gaussian assumption over the forecast distribution [20][21][22][23].…”
Section: Ensemble Kalman Filters Based On Modified Cholesky Decomposition
Mentioning (confidence: 99%)
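As a concrete illustration of the point made in this citation, the sketch below contrasts a single-Gaussian fit with a two-component GMM fit on a bimodal forecast ensemble; the synthetic ensemble and the component count are assumptions chosen purely for illustration.

```python
# Sketch: a bimodal forecast ensemble where a single Gaussian misrepresents the
# prior, while a two-component GMM captures both modes. All numbers are
# illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X_f = np.concatenate([rng.normal(-2.0, 0.5, size=(50, 1)),
                      rng.normal(+2.0, 0.5, size=(50, 1))])

single = GaussianMixture(n_components=1, random_state=0).fit(X_f)
mixture = GaussianMixture(n_components=2, random_state=0).fit(X_f)

query = np.array([[0.0], [2.0]])  # a point between the modes and a point on one mode
print("single-Gaussian log-density:", single.score_samples(query))
print("two-component GMM log-density:", mixture.score_samples(query))
```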
“…Thus, alternatives to EnKF formulations are a must under such circumstances, and therefore, sampling methods based on Markov chain Monte Carlo (MCMC) can be exploited to successfully sample from posterior error distributions. In "Cluster Sampling Filters for Non-Gaussian Data Assimilation" [14], Attia et al. propose filters which account for non-Gaussian errors in the prior and observations. Furthermore, the convergence of MCMC is sped up by using Verlet integrators.…”
Section: Efficient Formulation and Implementation Of Data Assimilation
Mentioning (confidence: 99%)
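The citation above points to MCMC sampling of the posterior and to Verlet (leapfrog) integrators within HMC. The sketch below shows one HMC proposal built on a leapfrog integrator for a stand-in Gaussian log-posterior; the target density, step size, and trajectory length are assumptions for illustration, not settings from the paper.

```python
# One Hamiltonian Monte Carlo step with a leapfrog (Verlet-type) integrator.
# log_post is a stand-in standard-normal log-posterior; eps and n_leap are
# illustrative tuning choices.
import numpy as np

def log_post(x):
    return -0.5 * np.sum(x**2)

def grad_log_post(x):
    return -x

def hmc_step(x, rng, eps=0.1, n_leap=20):
    p = rng.normal(size=x.shape)                        # auxiliary momentum
    x_new, p_new = x.copy(), p.copy()
    p_new = p_new + 0.5 * eps * grad_log_post(x_new)    # initial half kick
    for _ in range(n_leap):
        x_new = x_new + eps * p_new                     # drift
        p_new = p_new + eps * grad_log_post(x_new)      # kick
    p_new = p_new - 0.5 * eps * grad_log_post(x_new)    # trim last kick to a half
    # Metropolis accept/reject on the change in the Hamiltonian
    h_old = -log_post(x) + 0.5 * np.sum(p**2)
    h_new = -log_post(x_new) + 0.5 * np.sum(p_new**2)
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

rng = np.random.default_rng(2)
x = np.zeros(3)
chain = []
for _ in range(1000):
    x = hmc_step(x, rng)
    chain.append(x)
print("posterior mean estimate:", np.mean(chain, axis=0))
```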
“…Following the strategy described in [1], we start with a synthetic prior ensemble generated from a GMM with N_c = 5. A GMM approximation of the true prior probability distribution is then constructed using the EM algorithm.…”
Section: A One-dimensional Example
Mentioning (confidence: 99%)
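A minimal reproduction of the kind of setup this citation describes: draw a synthetic one-dimensional prior ensemble from a GMM with N_c = 5 components, then recover a GMM approximation of the prior with the EM algorithm. The component means, weights, spreads, and ensemble size below are assumptions chosen for illustration.

```python
# 1-D sketch: synthetic prior ensemble from a 5-component GMM, then an EM fit.
# All mixture parameters are illustrative, not values from [1].
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
means = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])
weights = np.full(5, 0.2)

comp = rng.choice(5, size=500, p=weights)               # component of each member
ensemble = rng.normal(loc=means[comp], scale=0.8).reshape(-1, 1)

gmm = GaussianMixture(n_components=5, random_state=0).fit(ensemble)  # EM fit
print("estimated means:", np.sort(gmm.means_.ravel()))
print("estimated weights:", gmm.weights_)
```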
“…Cluster sampling filters (C HMC and MC-C HMC) [1] are developed as extensions of the Hamiltonian Monte-Carlo (HMC) sampling filter presented in [3], where the true (unknown) prior distribution is approximated using a Gaussian mixture model (GMM). Given the current computational power, it is natural to try to run Monte-Carlo simulations in parallel.…”
Section: Introduction
Mentioning (confidence: 99%)
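Since this citation highlights running the Monte-Carlo sampling in parallel, the sketch below runs several independent chains across worker processes and pools their draws. The random-walk proposal and the standard-normal target are stand-ins for the paper's HMC proposals and filtering posterior, used here only to keep the example self-contained.

```python
# Multi-chain sketch: independent Metropolis random-walk chains run in parallel
# processes, then pooled. The proposal and target are illustrative stand-ins.
import numpy as np
from multiprocessing import Pool

def run_chain(seed, n_steps=2000):
    rng = np.random.default_rng(seed)
    x, draws = 0.0, []
    for _ in range(n_steps):
        prop = x + rng.normal(scale=0.5)
        # accept/reject against a stand-in standard-normal target
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        draws.append(x)
    return np.array(draws)

if __name__ == "__main__":
    with Pool(4) as pool:
        chains = pool.map(run_chain, range(4))           # one chain per worker
    pooled = np.concatenate([c[500:] for c in chains])   # discard burn-in
    print("pooled mean:", pooled.mean(), "pooled std:", pooled.std())
```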