2017
DOI: 10.1137/16m1096372

Well-Posed Bayesian Inverse Problems with Infinitely Divisible and Heavy-Tailed Prior Measures

Abstract: We present a new class of prior measures in connection to ℓp regularization techniques when p ∈ (0, 1), which is based on the generalized Gamma distribution. We show that the resulting prior measure is heavy-tailed, non-convex, and infinitely divisible. Motivated by this observation we discuss the class of infinitely divisible prior measures and draw a connection between their tail behavior and the tail behavior of their Lévy measures. Next, we use the laws of pure jump Lévy processes in order to define a new class…
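As a rough illustration of the heavy-tailed, non-convex priors the abstract describes, the sketch below compares a density of the form exp(−|x|^p) with p ∈ (0, 1) against Gaussian and Laplace references. This is a minimal one-dimensional stand-in, not the paper's construction; the function names and the choice p = 0.5 are ours.

```python
import numpy as np

def neg_log_prior(x, p=0.5):
    """Negative log-density |x|^p, up to an additive constant.

    For p in (0, 1) this penalty is non-convex, and the corresponding
    density exp(-|x|^p) has heavier tails than the Gaussian (p = 2)
    or Laplace (p = 1) densities.
    """
    return np.abs(x) ** p

x = np.linspace(-10.0, 10.0, 2001)
gaussian = np.exp(-0.5 * x**2)            # p = 2 reference
laplace = np.exp(-np.abs(x))              # p = 1 reference
heavy = np.exp(-neg_log_prior(x, p=0.5))  # p = 0.5: sub-exponential decay

# At |x| = 10 the p = 0.5 density exceeds the Laplace density by roughly
# a factor exp(10 - sqrt(10)) ~ 9e2, illustrating the heavier tail.
print(heavy[-1] / laplace[-1])
```

Unnormalised densities suffice here, since only tail ratios are being compared.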

Cited by 27 publications (38 citation statements: 2 supporting, 36 mentioning, 0 contrasting; citing works published 2017–2024). References 44 publications.
“…Stability of the posterior with respect to the observed data y and the log-likelihood Φ was established for Gaussian priors by Stuart (2010) and for more general priors by many later contributions (Dashti et al., 2012; Hosseini, 2017; Hosseini and Nigam, 2017; Sullivan, 2017). (We note in passing that the stability of BIPs with respect to perturbation of the prior is possible but much harder to establish, particularly when the data y are highly informative and the normalisation constant Z(y) is close to zero; see e.g.…”
Section: Bayesian Inverse Problems
confidence: 99%
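For orientation, the stability result this excerpt refers to is usually stated as a local Lipschitz bound in the Hellinger distance. A representative form, following Stuart (2010) with the regularity assumptions on Φ suppressed, is:

```latex
% Bayes' formula relative to the prior \mu_0:
\frac{\mathrm{d}\mu^{y}}{\mathrm{d}\mu_{0}}(u)
  = \frac{\exp(-\Phi(u;y))}{Z(y)},
\qquad
Z(y) = \int \exp(-\Phi(u;y))\,\mu_{0}(\mathrm{d}u),
% and local Lipschitz stability in the Hellinger distance:
d_{\mathrm{H}}(\mu^{y},\mu^{y'}) \le C(r)\,\|y - y'\|
\quad \text{whenever } \|y\|,\ \|y'\| \le r.
```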
“…We now show that by Theorem 10 we obtain well-posedness w.r.t. the Wasserstein distance under the same basic assumptions on Φ stated in [7,38], as well as their slight modifications in [21,22,40], for establishing well-posedness w.r.t. the Hellinger distance.…”
Section: Remark 13 (Proofs Via Couplings)
confidence: 99%
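The "proofs via couplings" in the section title refer to the coupling characterisation of the Wasserstein distance, which replaces the Hellinger distance in this well-posedness statement. The standard definition, for a metric space (X, d), is:

```latex
W_{1}(\mu,\nu)
  = \inf_{\gamma \in \Gamma(\mu,\nu)}
    \int_{X \times X} d(u,v)\,\gamma(\mathrm{d}u,\mathrm{d}v),
```

where Γ(μ, ν) denotes the set of couplings of μ and ν, i.e. probability measures on X × X with marginals μ and ν; an upper bound on W₁ then follows from exhibiting any single coupling.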
“…Recently, several hierarchical models, which promote more versatile behaviors, have been developed. These include, for example, deep Gaussian processes (Dunlop et al., 2018; Emzir et al., 2020), level-set methods (Dunlop et al., 2017), mixtures of compound Poisson processes and Gaussians (Hosseini, 2017), and stacked Matérn fields via stochastic partial differential equations (Roininen et al., 2019). The problem with hierarchical priors is that in the posteriors, the parameters and hyperparameters may become strongly coupled, which means that vanilla MCMC methods become problematic and, for example, reparameterizations are needed for sampling the posterior efficiently (Chada et al., 2019; Monterrubio-Gómez et al., 2020).…”
Section: Literature Review
confidence: 99%
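To make the coupling issue concrete: for a hierarchical Gaussian prior u | θ ~ N(0, θ²I), the non-centered reparameterization u = θw with w ~ N(0, I) removes the prior dependence between the latent field and its hyperparameter. The toy sketch below is our own illustration of this standard trick, not the model of any paper cited above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_centered(theta, dim):
    # Centered parameterization: u is drawn directly, its scale tied to
    # theta, so u and theta are strongly coupled a priori.
    return rng.normal(0.0, theta, size=dim)

def sample_noncentered(theta, dim):
    # Non-centered parameterization: w ~ N(0, I) is a priori independent
    # of theta; u = theta * w has the same marginal law N(0, theta^2 I).
    w = rng.normal(0.0, 1.0, size=dim)
    return theta * w

u_c = sample_centered(theta=0.1, dim=5)
u_nc = sample_noncentered(theta=0.1, dim=5)
print(u_c)
print(u_nc)
```

An MCMC sampler targeting (θ, w) instead of (θ, u) typically mixes better when the data only weakly constrain u, which is the point about reparameterization made in the excerpt.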