2019
DOI: 10.1016/j.cma.2018.12.015
A continuous analogue of the tensor-train decomposition

Cited by 67 publications (87 citation statements)
References 44 publications
“…Consequently, MC methods are often computationally intractable for high‐fidelity simulation models because they require a prohibitively large number of evaluations to obtain moderate accuracy in response statistics. In contrast, surrogate methods such as polynomial chaos, Gaussian processes, low‐rank decompositions, and sparse grid interpolation can be used to build an approximation of the input‐output response, often at a fraction of the cost of MC sampling. Once the surrogate has been constructed, various uncertainty quantification (UQ) tasks, such as sensitivity analysis, density estimation, etc., can then be performed on the approximation at negligible cost…”
Section: Introduction
mentioning, confidence: 99%
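
The passage above contrasts direct MC sampling of an expensive model with sampling a cheap surrogate. Below is a minimal sketch of that workflow using a Legendre polynomial surrogate in NumPy; the expensive_model stand-in, sample sizes, and polynomial degree are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

# Hypothetical stand-in for a high-fidelity simulation model; in practice
# each evaluation would be expensive.
def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)

# Build the surrogate from a small budget of model evaluations.
x_train = rng.uniform(-1.0, 1.0, 20)                 # 20 "expensive" runs
y_train = expensive_model(x_train)
coeffs = np.polynomial.legendre.legfit(x_train, y_train, deg=8)

# UQ tasks (here: estimating the response mean) now run on the surrogate
# at negligible cost instead of on the model itself.
x_mc = rng.uniform(-1.0, 1.0, 100_000)               # large MC sample
y_mc = np.polynomial.legendre.legval(x_mc, coeffs)
print("surrogate mean estimate:", y_mc.mean())
```
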
“…These sketching methods also naturally extend to out-of-core algorithms, and may even be used to increase scalability in distributed memory. Finally, it is natural to consider application of randomization to other decompositions such as Tucker [38], tensor train [28], or functional tensor decompositions [11,17]…”
Section: Discussion
mentioning, confidence: 99%
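
The sketching idea referenced above is easiest to see in the matrix setting, where randomized range finding yields a truncated SVD; the tensor extensions mentioned (Tucker, tensor train, functional decompositions) reuse this building block. A minimal sketch, assuming a Halko-Martinsson-Tropp style randomized SVD; the function name, oversampling parameter, and synthetic test matrix are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    """Sketch-based truncated SVD via randomized range finding."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Sketch: compress the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the sketch
    # Solve the small problem in the sketched subspace.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank, :]

# Demo on a synthetic low-rank matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 400))
U, s, Vt = randomized_svd(A, rank=20)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```
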
“…To achieve 1), one can for instance store the rows and columns of the Vandermonde matrix which are called with highest probability. To achieve 2), one can exploit further insight into the functions, for instance using a low-rank approximation [11] or functional analogues of tensor decomposition approximation [10]…”
Section: Discussion
mentioning, confidence: 99%
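
A minimal sketch of point 2) above: replacing explicit storage of a Vandermonde matrix with truncated-SVD factors from which individual entries are reconstructed on demand. The matrix sizes and truncation rank are illustrative assumptions, not values from reference [11].

```python
import numpy as np

# Vandermonde matrix for evaluating a degree-49 polynomial basis at 2000 nodes.
nodes = np.linspace(0.0, 1.0, 2000)
V = np.vander(nodes, N=50, increasing=True)        # shape (2000, 50)

# Vandermonde matrices on [0, 1] are numerically low-rank: store truncated
# SVD factors instead of the full matrix.
U, s, Vt = np.linalg.svd(V, full_matrices=False)
r = 15                                             # illustrative truncation rank
Ur, sr, Vtr = U[:, :r], s[:r], Vt[:r, :]

def entry(i, j):
    """Reconstruct V[i, j] on demand from the compact factors."""
    return (Ur[i] * sr) @ Vtr[:, j]

print("exact:", V[123, 7], " low-rank:", entry(123, 7))
```
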