2016
DOI: 10.1175/mwr-d-14-00292.1
Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

Abstract: The Bayesian filtering problem for data assimilation is considered following the kernel-based ensemble Gaussian mixture filtering (EnGMF) approach introduced by Anderson and Anderson. In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update o…
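As a rough illustration of the approach described in the abstract, here is a minimal Python sketch of an EnGMF-style analysis step under simplifying assumptions (linear observation operator, one shared kernel covariance with a Silverman-type bandwidth); all names are illustrative, and this is not the paper's implementation:

```python
import numpy as np

def engmf_analysis(Xf, y, H, R, beta2=None):
    """One EnGMF-style analysis step (a sketch, not the authors' code).

    Xf : (N, n) forecast ensemble; each member is a Gaussian kernel center.
    y  : (m,)  observation vector.
    H  : (m, n) linear observation operator (a simplifying assumption).
    R  : (m, m) observation-error covariance.
    beta2 : squared kernel bandwidth; defaults to a Silverman-type
            rule of thumb (Silverman, 1986).
    """
    N, n = Xf.shape
    if beta2 is None:
        beta2 = (4.0 / (N * (n + 2))) ** (2.0 / (n + 4))

    Pk = beta2 * np.cov(Xf, rowvar=False)   # shared kernel covariance
    S = H @ Pk @ H.T + R                    # innovation covariance
    K = Pk @ H.T @ np.linalg.inv(S)         # Kalman gain, same for all kernels

    innov = y - Xf @ H.T                    # (N, m) per-member innovations
    Xa = Xf + innov @ K.T                   # KF-like update of the members

    # Weight update: likelihood of y under each kernel's predicted-obs pdf
    q = np.einsum('ij,jk,ik->i', innov, np.linalg.inv(S), innov)
    w = np.exp(-0.5 * (q - q.min()))
    return Xa, w / w.sum()
```

In the full scheme the weighted posterior mixture would then be resampled before the next forecast step; that part is omitted here.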

Cited by 26 publications (13 citation statements)
References 61 publications
“…We assume that $H_n^x$ and $H_n^z$ are linear for simplicity, but the proposed ensemble schemes can be easily extended to the case of nonlinear observation operators as discussed for example in Liu et al. (2016). The model noise terms, $\eta^x = \{\eta_n^x\}_n$ and $\eta^z = \{\eta_n^z\}_n$, and the observation noise terms, $\varepsilon^x = \{\varepsilon_n^x\}_n$ and $\varepsilon^z = \{\varepsilon_n^z\}_n$, are assumed to be independent, jointly independent and independent of the initial states $x_0$ and $z_0$.…”
Section: The Classical EnKF for OWC Systems
confidence: 99%
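The extension mentioned in this statement amounts, in generic terms, to applying the nonlinear operator member by member and building the gain from ensemble cross-covariances in observation space. A minimal stochastic-EnKF sketch under that assumption (the operator h and all names are illustrative, not taken from the cited paper):

```python
import numpy as np

def enkf_analysis_nonlinear_h(Xf, y, h, R, rng=None):
    """Stochastic EnKF analysis with a nonlinear observation operator h.

    h is applied member by member, and the gain is built from ensemble
    cross-covariances in observation space, so h is never linearized
    explicitly.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = Xf.shape[0]
    Yf = np.array([h(x) for x in Xf])        # predicted observations, (N, m)
    Ax = Xf - Xf.mean(axis=0)
    Ay = Yf - Yf.mean(axis=0)
    Pxy = Ax.T @ Ay / (N - 1)                # state/obs cross-covariance
    Pyy = Ay.T @ Ay / (N - 1) + R            # obs-space innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)
    Yp = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return Xf + (Yp - Yf) @ K.T              # perturbed-observation update
```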
“…Therefore, the covariance of each kernel is estimated from the inter-model differences even though the autocorrelation of the individual models is lost. This is a very common choice in kernel-based probability density approximations (Liu et al., 2016; Silverman, 1986).…”
Section: Kernel Model
confidence: 99%
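To illustrate that choice, the sketch below builds a Gaussian-mixture density whose kernels share a single covariance estimated from the spread between the samples, scaled by a Silverman (1986) rule-of-thumb bandwidth; the helper name gm_density is hypothetical:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gm_density(points, X, beta2=None):
    """Evaluate a kernel density estimate built from samples X (rows).

    All kernels share one covariance, beta2 * cov(X), estimated from the
    inter-sample spread, with a Silverman-type rule-of-thumb bandwidth.
    """
    N, d = X.shape
    if beta2 is None:
        beta2 = (4.0 / (N * (d + 2))) ** (2.0 / (d + 4))
    C = beta2 * np.cov(X, rowvar=False)
    return np.mean([multivariate_normal(mu, C).pdf(points) for mu in X], axis=0)
```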
“…Even though systematic errors in the system and model noise (issue (i)) may be partially treated using well-known parameter estimation techniques (e.g., Dee, 2005; Gharamti et al., 2015; Dreano et al., 2017; Ait-El-Fquih and Hoteit, 2018; Sakov et al., 2018), and those in the filter (issue (ii)) by, for instance, relaxing the Gaussian assumption made on the analysis pdf to a Gaussian mixture through the use of an ensemble Gaussian mixture filter (e.g., Hoteit et al., 2008; Frei and Künsch, 2013; Liu et al., 2015), sampling errors are inevitable. Many applications have demonstrated that the EnKF can tolerate sampling errors by applying auxiliary techniques, the most standard of which are covariance inflation (Anderson, 2001) and covariance localization (Houtekamer and Mitchell, 1998); other techniques have also been proposed, for example, by Hamill and Snyder (2000), Song et al. (2010), and Luo and Hoteit (2011).…”
Section: Introduction
confidence: 99%
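For reference, here are minimal sketches of the two standard remedies named in this statement: multiplicative inflation (Anderson, 2001) and covariance localization in the spirit of Houtekamer and Mitchell (1998). The Gaspari-Cohn taper shown is one common choice of localization function, an assumption here rather than something specified by the quoted text; function names and defaults are illustrative:

```python
import numpy as np

def inflate(X, lam=1.05):
    """Multiplicative covariance inflation (Anderson, 2001):
    spread the members about the ensemble mean by a factor lam >= 1."""
    Xm = X.mean(axis=0)
    return Xm + lam * (X - Xm)

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order compactly supported correlation function
    of normalized distance r (array-like); zero beyond r = 2."""
    r = np.abs(np.asarray(r, dtype=float))
    f = np.zeros_like(r)
    near, far = r <= 1.0, (r > 1.0) & (r < 2.0)
    x = r[near]
    f[near] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - (5/3)*x**2 + 1.0
    x = r[far]
    f[far] = (x**5)/12 - 0.5*x**4 + 0.625*x**3 + (5/3)*x**2 - 5*x + 4 - (2/3)/x
    return f

# Localization tapers a sample covariance with a Schur (elementwise) product:
# P_loc = gaspari_cohn(D / c) * P, where D holds pairwise distances and c is
# the localization half-width (both problem-dependent).
```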