2018
DOI: 10.1088/1361-6420/aaa998
Sampling-free Bayesian inversion with adaptive hierarchical tensor representations

Abstract: A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject t…
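For orientation, a sketch of the standard Bayesian setting the abstract refers to (generic notation chosen here, not taken verbatim from the paper): given noisy data δ of a forward map G applied to the unknown parameter y, with noise covariance Γ and prior density π₀, the posterior density is

\[
  \pi^{\delta}(y) \;=\; \frac{1}{Z}\,
  \exp\!\Big(-\tfrac{1}{2}\,\big\|\Gamma^{-1/2}\big(\delta - G(y)\big)\big\|^{2}\Big)\,\pi_{0}(y),
  \qquad
  Z \;=\; \int \exp\!\Big(-\tfrac{1}{2}\,\big\|\Gamma^{-1/2}\big(\delta - G(y)\big)\big\|^{2}\Big)\,\pi_{0}(y)\,\mathrm{d}y.
\]

Instead of drawing samples from π^δ, the paper represents this density explicitly in a hierarchical tensor (polynomial) format based on the affine-parametric forward model; this is the setting in which the abstract's error bounds for all occurring approximations are derived.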

Cited by 15 publications (21 citation statements)
References 54 publications
“…It should be noted that a functional adaptive evaluation of the forward map allows for the derivation of an explicit adaptive Bayesian inversion with functional tensor representations as in [17]. The results of the present work lay the ground for a similar approach with a Gaussian prior assumption.…”
Section: Introduction
confidence: 62%
“…This is motivated by our previous work on adaptive low-rank approximations of solutions of parametric random PDEs with Adaptive Stochastic Galerkin FEM (ASGFEM, see e.g. [20,17]) and in particular the sampling-free Bayesian inversion presented in [18] where the setting of uniform random variables was examined. A generalization to the important case of Gaussian random variables turns out to be non-trivial from a computational point of view due to the difficulties of representing highly concentrated densities in a compressing tensor format which is required in order to cope with the high dimensionality of the problem.…”
Section: Overview
confidence: 99%
“…We recall the general formalism and highlight the notation with the setup of Section 2 in mind. We closely follow the presentation in [18] and refer to [54,10,33] for a comprehensive overview.…”
Section: Applications
confidence: 99%
“…Approximation of the multivariate target distribution can be recommended for the following two cases: First, the quantity of interest may be very poorly representable in the TT format, and hence direct tensor product integration of the QoI, as suggested in [11], is not possible. The most remarkable example is the indicator function, which occurs in the computation of the probability of an event.…”
Section: Introduction
confidence: 99%
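The last excerpt notes that indicator functions, which arise when computing the probability of an event, are poorly representable in the tensor-train (TT) format. A minimal NumPy sketch (illustrative only, not code from the cited works; the function name, tolerance and grid sizes are choices made here) shows the effect with a plain TT-SVD rank sweep: a separable smooth function compresses to TT ranks of 1, while the indicator of an event on the same grid does not.

# Compare TT ranks of a separable smooth function and of an indicator
# function on the same tensor grid, using a basic TT-SVD rank sweep.
import numpy as np


def tt_ranks(tensor, tol=1e-8):
    """TT ranks of `tensor` from a TT-SVD sweep with relative tolerance `tol`."""
    dims = tensor.shape
    d = len(dims)
    # Distribute the overall tolerance over the d-1 truncated SVDs.
    delta = tol * np.linalg.norm(tensor) / np.sqrt(d - 1)
    ranks = [1]
    core = tensor
    for k in range(d - 1):
        core = core.reshape(ranks[-1] * dims[k], -1)
        u, s, vt = np.linalg.svd(core, full_matrices=False)
        # Smallest rank whose discarded singular-value tail stays below delta.
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
        r = max(1, int(np.sum(tail > delta)))
        ranks.append(r)
        # Carry the remainder to the next unfolding (u[:, :r] would be the TT core).
        core = np.diag(s[:r]) @ vt[:r, :]
    ranks.append(1)
    return ranks


if __name__ == "__main__":
    d, n = 6, 8                                   # modest grid: 8**6 entries
    grid = np.linspace(0.0, 1.0, n)
    axes = np.meshgrid(*([grid] * d), indexing="ij")
    total = sum(axes)
    smooth = np.exp(-total)                       # separable: all TT ranks equal 1
    indicator = (total <= d / 2).astype(float)    # indicator of an event
    print("smooth   :", tt_ranks(smooth))
    print("indicator:", tt_ranks(indicator))

Any TT toolbox would report the same qualitative gap; the point is only that the sharp jump of the indicator forces large ranks, which is why the excerpt recommends approximating the multivariate target distribution instead of integrating such a quantity of interest directly in the TT format.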