2019
DOI: 10.1007/s11222-019-09910-z
Approximation and sampling of multivariate probability distributions in the tensor train decomposition

Abstract: General multivariate distributions are notoriously expensive to sample from, particularly the high-dimensional posterior distributions in PDE-constrained inverse problems. This paper develops a sampler for arbitrary continuous multivariate distributions that is based on low-rank surrogates in the tensor-train format, a methodology that has been exploited for many years for scalable, high-dimensional density function approximation in quantum physics and chemistry. We build upon recent developments of the cross …

Cited by 55 publications (68 citation statements); references 48 publications.
“…To overcome this cost, Dolgov et al. [25] precomputed an approximation of the target density in a compressed tensor-train representation, which allows fast computation of the integrals in (17) and subsequent simulation of the inverse Rosenblatt transformation from the conditionals in (17), and showed that the computational cost scales linearly with the dimension d. Practical examples presented in [25], in dimension …, demonstrate that operation by the forward and inverse Rosenblatt transformations is computationally feasible for multivariate problems with no special structure.…”
Section: Summary and Discussion
confidence: 99%
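The inverse Rosenblatt transformation referenced above maps uniform variables to samples of the target by inverting conditional CDFs one coordinate at a time. A minimal two-dimensional sketch of that idea (the grid, target density, and function names here are illustrative, not the paper's tensor-train implementation):

```python
import numpy as np

def cdf_on_grid(p, t):
    """Cumulative trapezoid of a positive density p on grid t, normalized to [0, 1]."""
    F = np.concatenate([[0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(t))])
    return F / F[-1]

def inverse_rosenblatt_2d(density, xs, ys, u):
    """Map u in [0,1]^2 to a sample of `density` by inverting the
    conditional CDFs coordinate by coordinate (order: x, then y given x)."""
    p_xy = density(xs[:, None], ys[None, :])                              # density on the grid
    p_x = 0.5 * ((p_xy[:, 1:] + p_xy[:, :-1]) * np.diff(ys)).sum(axis=1)  # x-marginal
    x = np.interp(u[0], cdf_on_grid(p_x, xs), xs)                         # invert marginal CDF
    p_y = density(x, ys)                                                  # unnormalized conditional in y
    y = np.interp(u[1], cdf_on_grid(p_y, ys), ys)                         # invert conditional CDF
    return np.array([x, y])

# Illustrative target: unnormalized Gaussian with correlation 0.8.
def gauss(x, y):
    return np.exp(-0.5 * (x**2 - 1.6 * x * y + y**2) / (1.0 - 0.8**2))

xs = np.linspace(-5.0, 5.0, 401)
ys = np.linspace(-5.0, 5.0, 401)
rng = np.random.default_rng(0)
samples = np.array([inverse_rosenblatt_2d(gauss, xs, ys, rng.random(2))
                    for _ in range(2000)])
```

In the paper's setting, the grid-based marginals and conditionals above are replaced by one-dimensional integrals of a tensor-train surrogate, which is what makes the cost linear in the dimension d rather than exponential.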
“…These induce pseudo-random and quasi-Monte Carlo sequences, respectively, on the space X via the inverse Rosenblatt transformation [29]. Both of these schemes were demonstrated in practical high-dimensional settings in [25].…”
Section: Summary and Discussion
confidence: 99%
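The quasi-Monte Carlo variant works by pushing a low-discrepancy sequence on the unit cube through an inverse CDF. A one-dimensional sketch, using a base-2 van der Corput sequence and the standard normal inverse CDF from the Python standard library (illustrative only, not the cited construction):

```python
from statistics import NormalDist

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    points = []
    for i in range(1, n + 1):
        q, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            i, r = divmod(i, base)
            q += r / denom
        points.append(q)
    return points

# Push the quasi-Monte Carlo points in (0, 1) through the inverse CDF of N(0, 1):
# the result is a deterministic, evenly spread set of "samples" of the normal law.
nd = NormalDist()
qmc_normals = [nd.inv_cdf(u) for u in van_der_corput(256)]
```

In the multivariate case the inverse Rosenblatt transformation plays the role of `inv_cdf`, mapping each coordinate of the low-discrepancy point through a conditional inverse CDF in turn.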
“…The recently developed transport map idea, e.g. [4, 17, 39, 43, 50], offers new insights for this task by identifying a measurable mapping T : U → X such that the pushforward of μ, denoted by T♯μ, is a close approximation to ν_π. The mapping T can then be used either to accelerate classical sampling methods such as MCMC or to improve the efficiency of importance sampling.…”
Section: Introduction
confidence: 99%
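The transport-map idea can be sketched with the simplest possible instance: a lower-triangular (Knothe–Rosenblatt-style) linear map T that pushes a standard Gaussian reference μ forward to a correlated Gaussian target. The map and its coefficients below are illustrative, not taken from the cited works:

```python
import numpy as np

def T(u):
    """Lower-triangular linear transport map on R^2:
        x1 = u1
        x2 = 0.8*u1 + 0.6*u2
    Pushes N(0, I) forward to a Gaussian with unit variances and correlation 0.8,
    since Var(x2) = 0.8**2 + 0.6**2 = 1 and Cov(x1, x2) = 0.8."""
    x1 = u[..., 0]
    x2 = 0.8 * u[..., 0] + 0.6 * u[..., 1]
    return np.stack([x1, x2], axis=-1)

# Sampling from the pushforward T♯μ: draw from the reference measure, apply T.
rng = np.random.default_rng(1)
u = rng.standard_normal((5000, 2))
x = T(u)
```

This is the pattern behind the accelerated samplers mentioned above: once a map T is available, independent reference draws (or MCMC and importance-sampling proposals) are transported to the target at negligible per-sample cost.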