2022
DOI: 10.1016/j.jcp.2022.111559

Ensemble Kalman inversion for sparse learning of dynamical systems from time-averaged data

Cited by 16 publications (12 citation statements: 0 supporting, 12 mentioning, 0 contrasting). References 77 publications.
“…On the other hand, UKI provides information about parametric uncertainty and correlations, which can be used to improve models at the process level, and to rapidly compare the added value of increasingly precise observing systems. Other ensemble Kalman methods, such as the sparsity‐inducing EKI (Schneider et al., 2020) or the ensemble Kalman sampler (Garbuno‐Inigo et al., 2020), can provide solutions to the inverse problem with other useful properties. In addition, all these ensemble methods generate parameter‐output pairs that can be used to train emulators for uncertainty quantification that can capture non‐Gaussian posteriors (Cleary et al., 2021).…”
Section: Discussion (mentioning)
confidence: 99%
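The statements on this page treat ensemble Kalman methods as derivative-free solvers for an inverse problem $y = G(\theta) + \eta$. As a point of reference, here is a minimal sketch of one generic EKI update step; it shows the basic algorithm, not the sparsity-inducing variant indexed by this page, and `G`, `y`, and `Gamma` are placeholder inputs.

```python
import numpy as np

def eki_update(thetas, G, y, Gamma, rng):
    """One generic EKI step.
    thetas: (J, p) parameter ensemble; G: forward map R^p -> R^d;
    y: (d,) observations; Gamma: (d, d) observation-noise covariance."""
    J = thetas.shape[0]
    Gs = np.array([G(t) for t in thetas])        # (J, d) ensemble of model outputs
    dth = thetas - thetas.mean(0)                # parameter anomalies
    dG = Gs - Gs.mean(0)                         # output anomalies
    C_thG = dth.T @ dG / J                       # empirical cross-covariance (p, d)
    C_GG = dG.T @ dG / J                         # empirical output covariance (d, d)
    K = C_thG @ np.linalg.inv(C_GG + Gamma)      # Kalman gain (p, d)
    xi = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return thetas + (y + xi - Gs) @ K.T          # update with perturbed observations
```

Because each update is a linear combination of ensemble members, every iterate stays in the linear span of the initial ensemble, which is the "compact support property" invoked in the next statement.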
“…The inset in Figure 8b shows how the higher‐complexity hybrid model moderately overfits to the training set after ∼10 epochs, a behavior that is not observed with the empirical model. Hence, in the low‐data regime (d ≲ p), adoption of techniques such as early stopping (Prechelt, 1998) or sparsity‐inducing regularization (Schneider et al., 2020) becomes necessary. The compact support property of EKI, which mandates that the solution be in the linear span of the initial ensemble, also regularizes the learned hybrid model with decreasing J; for J = 50 < p, overfitting is significantly reduced.…”
Section: Application To An Atmospheric Subgrid‐Scale Model (mentioning)
confidence: 99%
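The overfitting diagnosis above names early stopping as one remedy. A hedged sketch of how that could wrap the `eki_update` step from the previous example, assuming a user-supplied held-out misfit function `val_misfit` (the names `patience` and `max_iters` are illustrative, not the citing paper's API):

```python
import numpy as np

def run_eki_with_early_stopping(thetas, G, y, Gamma, val_misfit,
                                max_iters=100, patience=3, seed=0):
    """Iterate EKI until the validation misfit of the ensemble mean stalls."""
    rng = np.random.default_rng(seed)
    best, best_thetas, stall = np.inf, thetas, 0
    for _ in range(max_iters):
        thetas = eki_update(thetas, G, y, Gamma, rng)
        m = val_misfit(thetas.mean(0))   # misfit on held-out data
        if m < best:
            best, best_thetas, stall = m, thetas, 0
        else:
            stall += 1                   # no improvement this iteration
            if stall >= patience:        # stop before the ensemble overfits
                break
    return best_thetas
```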
“…Methods for quantifying structural uncertainty are less well developed than those for parametric uncertainty, but an established approach is to model the structural error as a Gaussian process at the interface of model and data (Kennedy & O’Hagan, 2000). An alternative is to use Gaussian processes or other machine learning techniques—for example, neural networks or learning from a dictionary of candidate terms (Brunton et al., 2016; Schneider et al., 2021)—directly where structural model errors actually occur, for example, in the collision kernel. In our example, the direct correspondence of the collision and breakup kernels between Cloudy and PySDM allowed us to instead use a simple additive bias term.…”
Section: Summary and Discussion (mentioning)
confidence: 99%
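The "simple additive bias term" mentioned above can be realized by augmenting the parameter vector so that the same calibration machinery estimates the bias jointly with the physical parameters. A minimal sketch, where the split index `p` and the forward map `G` are assumptions for illustration:

```python
def augmented_forward(G, p):
    """Wrap a forward map so the entries of the parameter vector beyond
    index p act as an additive structural-error term on the output."""
    def G_aug(theta_aug):
        theta, b = theta_aug[:p], theta_aug[p:]
        return G(theta) + b              # model output plus learned bias
    return G_aug
```

`augmented_forward(G, p)` can then be handed to an ensemble method such as the `eki_update` sketch above, with each ensemble member carrying the extra bias entries.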
“…One interpretation of this added noise is that it plays the role of an artificial corruption of $\mathcal{S}_T(\boldsymbol{\theta}^{\dagger}; k)$, with unbiased model error $\delta_k$ that plays the same role as additional observational noise (Kennedy & O’Hagan, 2001). One can obtain unbiased $\delta_k$ by inclusion of models for structural model error within $\mathcal{S}_T$, for example, learned error models that enforce conservation laws and sparsity (M. E. Levine & Stuart, 2021; Schneider et al., 2022). The inverse problem can be written as $z_k = \mathcal{S}_\infty(\boldsymbol{\theta}; k) + \gamma_k$, $\quad \gamma_k \sim N\left(0, W_k(\Sigma(\boldsymbol{\theta}) + \Delta)W_k^T\right)$.…”
Section: Idealized GCM and Experimental Setup (mentioning)
confidence: 99%
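To make the reconstructed noise model concrete, the sketch below draws $\gamma_k$ with covariance $W_k(\Sigma(\boldsymbol{\theta}) + \Delta)W_k^T$ and forms a synthetic observation $z_k$. Here `S`, `W_k`, `Sigma`, and `Delta` are stand-ins for the citing paper's operators, not a published API:

```python
import numpy as np

def sample_observation(S, theta, k, W_k, Sigma, Delta, rng):
    """Draw z_k = S(theta; k) + gamma_k with the stated noise covariance."""
    cov = W_k @ (Sigma(theta) + Delta) @ W_k.T   # total noise covariance
    gamma_k = rng.multivariate_normal(np.zeros(cov.shape[0]), cov)
    return S(theta, k) + gamma_k
```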