2015
DOI: 10.1137/140960980
Hierarchical Tensor Approximation of Output Quantities of Parameter-Dependent PDEs

Abstract (truncated): … uncertainty quantification or optimisation. In many cases, one is interested in scalar output quantities induced by the parameter-dependent solution. The output can be interpreted as a tensor living on a high-dimensional parameter space. Our aim is to adaptively construct an approximation of this tensor in a data-sparse hierarchical tensor format. Once this approximation from an offline computation is available, the evaluation of the output for any parameter value becomes a cheap online task. Moreover, the exp…

Cited by 43 publications (76 citation statements) · References 30 publications
“…To find such a tensor, low-rank techniques like the singular value decomposition or the cross approximation were generalized to be applicable to the ℋ-tensor format, cf. Grasedyck et al…”
Section: Projection-based Model Order Reduction
confidence: 99%
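The building block being generalized here is the truncated SVD, which gives the best low-rank approximation of a matrix in the Frobenius norm (Eckart–Young); in the hierarchical format it is applied to matricizations of the tensor along each node of the dimension tree. A minimal sketch of that building block (illustrative only, not the authors' implementation):

```python
import numpy as np

def truncated_svd(A, rank):
    """Best rank-`rank` approximation of A in the Frobenius norm
    (Eckart-Young). In the hierarchical Tucker setting this is applied
    to matricizations of the tensor at each node of the dimension tree."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Toy check: a 50x50 matrix of exact rank 3 is recovered (up to round-off).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))
err = np.linalg.norm(A - truncated_svd(A, 3))
```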
“…More precisely, when 𝒜 is given in the hierarchical Tucker decomposition, 𝒪(dk³ + dkn) operations are needed to compute the mean. See Ballani and Grasedyck for a detailed elaboration.…”
Section: Projection-based Model Order Reduction
confidence: 99%
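The reason the mean is cheap is that it can be contracted mode by mode through the low-rank representation without ever forming the full tensor. A hedged sketch for the simpler flat Tucker format (the hierarchical format runs the same contraction over the dimension tree, giving the quoted 𝒪(dk³ + dkn) cost):

```python
import numpy as np

def tucker_mean(core, factors):
    """Mean over all entries of the Tucker tensor
    A[i1,...,id] = sum_{j1..jd} core[j1..jd] * U1[i1,j1] * ... * Ud[id,jd],
    computed by contracting the core with the column means of each factor.
    Illustrative sketch only; cost is polynomial in d, k, n rather than n^d."""
    v = core
    for U in factors:
        m = U.mean(axis=0)                     # column means, length k
        v = np.tensordot(v, m, axes=([0], [0]))  # contract one mode
    return float(v)

# Check against the explicitly assembled tensor (small sizes only).
rng = np.random.default_rng(1)
core = rng.standard_normal((2, 2, 2))
Us = [rng.standard_normal((4, 2)) for _ in range(3)]
full = np.einsum('abc,ia,jb,kc->ijk', core, *Us)
```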
“…As an alternative, given sufficient properties of the parameter-to-solution map, surrogate models can be employed. Here we will exploit two properties: first, the regularity will be exploited by a tensorized Chebyshev polynomial basis, and second, the low-rank approximability by the hierarchical Tucker format, similar to [3].…”
Section: Introduction
confidence: 99%
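The one-dimensional ingredient of such a surrogate is standard Chebyshev interpolation: sample the smooth parameter-to-output map at Chebyshev points once (offline), then evaluate the polynomial cheaply for any parameter (online). A minimal 1-D sketch with a placeholder map `f` (in the cited setting the basis is tensorized over all parameters and the coefficient tensor is compressed in the hierarchical Tucker format):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Placeholder smooth parameter-to-output map (assumption, for illustration).
f = lambda p: np.exp(-p) * np.sin(3 * p)

deg = 20
# Chebyshev points of the first kind on [-1, 1].
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
coeffs = C.chebfit(nodes, f(nodes), deg)   # "offline": fit the surrogate

# "online": cheap evaluation at a new parameter value
p = 0.3
approx = C.chebval(p, coeffs)
```

For analytic maps the interpolation error decays geometrically in the degree, which is the regularity property the quoted passage exploits.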
“…Compute I_{>k−1} by the maxvol algorithm and (optionally) truncate. This process can also be organized in the form of a binary tree, which gives rise to the so-called hierarchical Tucker cross algorithm [1]. In total, we need O(dnr²) evaluations of ξ and O(dnr³) additional operations in the computation of the maximum-volume matrices.…”
confidence: 99%
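The maxvol step referenced here selects, from a tall n × r matrix, r rows whose submatrix has (quasi-)maximal volume; cross approximation then only evaluates the tensor at the corresponding index sets. A hedged sketch of the classic greedy row-swapping iteration (not the cited implementation):

```python
import numpy as np

def maxvol(A, tol=1e-2, max_iter=100):
    """Greedy maxvol: pick r rows of the tall n x r matrix A such that the
    r x r submatrix A[idx] is dominant, i.e. all entries of A @ inv(A[idx])
    have absolute value <= 1 + tol. Each swap multiplies the submatrix
    volume by the largest such entry. Assumes the first r rows are
    nonsingular as a starting guess (illustrative sketch only)."""
    n, r = A.shape
    idx = np.arange(r)
    for _ in range(max_iter):
        B = A @ np.linalg.inv(A[idx])           # coefficients of all rows
        i, j = np.unravel_index(np.abs(B).argmax(), B.shape)
        if abs(B[i, j]) <= 1.0 + tol:
            break                               # submatrix is dominant
        idx[j] = i                              # swap in the dominant row
    return idx

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 5))
idx = maxvol(A)
```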