2022
DOI: 10.48550/arxiv.2205.09322
Preprint
Hierarchical Ensemble Kalman Methods with Sparsity-Promoting Generalized Gamma Hyperpriors

Abstract: This paper introduces a computational framework to incorporate flexible regularization techniques in ensemble Kalman methods for nonlinear inverse problems. The proposed methodology approximates the maximum a posteriori (MAP) estimate of a hierarchical Bayesian model characterized by a conditionally Gaussian prior and generalized gamma hyperpriors. Suitable choices of hyperparameters yield sparsity-promoting regularization. We propose an iterative algorithm for MAP estimation, which alternates between updating…
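As background for the abstract, the basic (non-hierarchical) ensemble Kalman inversion update that such methods build on can be sketched as follows. This is a minimal illustration with a hypothetical linear forward map and synthetic data, not the paper's sparsity-promoting hierarchical algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, J = 5, 3, 50               # obs dim, parameter dim, ensemble size

# Hypothetical linear forward map G(u) = A u and a noisy observation y.
A = rng.standard_normal((d, m))
u_true = np.array([1.0, 0.0, -2.0])
Gamma = 0.01 * np.eye(d)         # observation noise covariance
y = A @ u_true + rng.multivariate_normal(np.zeros(d), Gamma)

U = rng.standard_normal((m, J))  # initial ensemble from a standard Gaussian prior

for _ in range(20):
    Gu = A @ U                                       # forward-map evaluations
    u_bar = U.mean(axis=1, keepdims=True)
    g_bar = Gu.mean(axis=1, keepdims=True)
    C_up = (U - u_bar) @ (Gu - g_bar).T / (J - 1)    # cross-covariance C^{up}
    C_pp = (Gu - g_bar) @ (Gu - g_bar).T / (J - 1)   # output covariance C^{pp}
    K = C_up @ np.linalg.inv(C_pp + Gamma)           # Kalman-type gain
    U = U + K @ (y[:, None] - Gu)                    # ensemble Kalman update

print(np.round(U.mean(axis=1), 2))  # ensemble mean approaches the least-squares fit
```

In the linear setting the iterated ensemble mean drives the data misfit toward its minimizer; the hierarchical method of the paper augments such updates with hyperparameter steps that promote sparsity.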

Cited by 4 publications (5 citation statements)
References 42 publications
“…Moreover, future studies will address the possibility of exploiting recent computational frameworks incorporating effective sparsity-promoting regularization techniques (e.g. relying on efficient hierarchical ensemble Kalman methods [64] or variational Bayesian inference [65]) to improve the capabilities of the MR-BCS-CSI.…”
Section: Discussion
confidence: 99%
“…For all these algorithms based on a Gaussian ansatz, the accuracy depends on some measure of being close to Gaussian. Regarding the use of Gaussian approximations in Kalman inversion we highlight, in addition to the approximate Bayesian methods already cited, the use of ensemble Kalman methods for optimization: see [64,24,76,63,135]. Kalman filtering has also been used in combination with variational inference [78].…”
Section: Gaussian Approximations
confidence: 99%
“…Ensemble-based optimization schemes often rely on statistical linearization to avoid the computation of derivatives. Underpinning this idea [95,19,52] is the argument that if G(u) = Au were linear, then C^{up} = C A^⊤, leading to the approximation in the general nonlinear case…”
Section: Ensemble Algorithms for Sequential Optimization
confidence: 99%
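The identity quoted in the last citation statement can be checked directly: when G(u) = Au is linear, the empirical cross-covariance C^{up} between the parameter and output ensembles equals C A^⊤ exactly, since the output deviations are A times the parameter deviations. A small numerical sketch with synthetic matrices (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, J = 4, 3, 500              # parameter dim, output dim, ensemble size
A = rng.standard_normal((d, m))  # linear forward map G(u) = A u

U = rng.standard_normal((m, J))  # parameter ensemble
Gu = A @ U                       # ensemble of forward-map outputs

u_bar = U.mean(axis=1, keepdims=True)
g_bar = Gu.mean(axis=1, keepdims=True)
C = (U - u_bar) @ (U - u_bar).T / (J - 1)      # empirical covariance C
C_up = (U - u_bar) @ (Gu - g_bar).T / (J - 1)  # cross-covariance C^{up}

# For linear G the identity C^{up} = C A^T holds exactly (not just in
# expectation), because Gu - g_bar = A (U - u_bar).
print(np.max(np.abs(C_up - C @ A.T)))  # ~ machine precision
```

For nonlinear G the same ensemble formulas define the statistical linearization used in derivative-free schemes, with the identity holding only approximately.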