2018
DOI: 10.1016/j.jcp.2018.08.057

Parallel tensor methods for high-dimensional linear PDEs

Abstract: High-dimensional partial differential equations (PDEs) arise in a number of fields of science and engineering, where they are used to describe the evolution of joint probability functions. Examples include the Boltzmann and Fokker-Planck equations. We develop new parallel algorithms to solve high-dimensional PDEs. The algorithms are based on canonical and hierarchical numerical tensor methods combined with alternating least squares and hierarchical singular value decomposition. Both implicit and explicit…
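The abstract's core ingredient, alternating least squares (ALS) for canonical tensor decompositions, is easy to illustrate. The following is a minimal NumPy sketch of CP-ALS for a 3-way array, not the paper's parallel implementation; the function names `cp_als` and `khatri_rao` are our own.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of U (m x r) and V (n x r) -> (m*n x r)."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def cp_als(X, rank, n_iter=50, seed=0):
    """Fit a rank-`rank` canonical (CP) decomposition of a 3-way array X by
    alternating least squares: X[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode unfoldings (C-order): rows index one mode, columns the rest.
    X1 = X.reshape(I, J * K)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        # Each update is a linear least-squares solve in one factor, using
        # the identity (B kr C)^T (B kr C) = (B^T B) * (C^T C) (Hadamard).
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

The paper combines this alternating strategy with hierarchical tensor formats and applies it to the linear systems arising from discretized high-dimensional PDEs.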

Cited by 23 publications (34 citation statements: 0 supporting, 34 mentioning, 0 contrasting)
References 86 publications

“…Implicit temporal integrators can mitigate this problem, but they require the development of linear solvers on tensor manifolds with constant rank. This can be achieved, e.g., by utilizing Riemannian optimization algorithms [49,44,45,23], or alternating least squares [14,29,43,6]. Let us discretize the spatial derivatives in (29) with second-order centered finite differences on an evenly spaced tensor-product grid in each variable.…”
Section: Stiffness in High-Dimensional PDEs (mentioning)
confidence: 99%
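The discretization mentioned in this excerpt (second-order centered differences on an evenly spaced tensor-product grid) has a convenient Kronecker structure, which is exactly what tensor methods exploit. A hedged sketch follows; equation (29) of the citing paper is not reproduced here, and the helper names `second_diff_1d` and `laplacian_nd` are introduced for illustration.

```python
from scipy.sparse import diags, identity, kron

def second_diff_1d(n, h):
    """Second-order centered finite-difference matrix for d^2/dx^2 on an
    evenly spaced grid with n interior points (homogeneous Dirichlet BCs)."""
    return diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2

def laplacian_nd(n, h, d):
    """d-dimensional Laplacian on a tensor-product grid as a Kronecker sum:
    L = sum_k I x ... x D2 x ... x I, with D2 in the k-th slot."""
    D2 = second_diff_1d(n, h)
    I = identity(n, format='csr')
    L = None
    for k in range(d):
        term = None
        for j in range(d):
            f = D2 if j == k else I
            term = f if term is None else kron(term, f, format='csr')
        L = term if L is None else L + term
    return L
```

Assembling L explicitly is only feasible for small d, since it has n^d rows; the point of tensor formats is to act with the one-dimensional Kronecker factors directly and never form the full operator.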
“…To avoid an undesirable growth of the tensor rank in time, we need to truncate u_k back to a tensor manifold with constant rank. This operation is essentially a nonlinear projection which can be computed, e.g., by a sequence of matricizations followed by higher-order singular value decomposition (HOSVD) [16,17,30], or by optimization [49,44,45,29,6,12,43,26].…”
Section: Introduction (mentioning)
confidence: 99%
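The rank-truncation step described in this excerpt (matricize each mode, truncate its SVD, project) is the standard HOSVD projection. A minimal NumPy sketch under that reading; `unfold` and `hosvd_truncate` are names introduced here, not taken from the cited works.

```python
import numpy as np

def unfold(X, mode):
    """Mode-`mode` matricization of a tensor (C-order)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd_truncate(X, ranks):
    """Project X onto the set of tensors with multilinear rank `ranks` via
    truncated higher-order SVD: X ~= core x_1 U[0] x_2 U[1] ... x_d U[d-1]."""
    U = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of each mode unfolding.
        u, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        U.append(u[:, :r])
    core = X
    for mode, u in enumerate(U):
        # Contract mode `mode` with u^T, then restore the axis order.
        core = np.moveaxis(np.tensordot(u.T, core, axes=(1, mode)), 0, mode)
    return core, U
```

The truncated tensor is recovered by multiplying the core back with the factors; the approximation error is controlled by the singular values discarded in each mode.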
“…We also provided sufficient conditions for consistency, stability and convergence of functional approximation schemes to compute the solution of FDEs, thus extending the well-known Lax-Richtmyer theorem from PDEs to FDEs. As we suggested in [69], these results open the possibility to utilize techniques for high-dimensional model representation such as deep neural networks [52,53,79] and numerical tensor methods [17,3,55,7,59,37] to represent nonlinear functionals and compute approximate solutions to functional differential equations. We conclude by emphasizing that the results we obtained in this paper can be extended to real- or complex-valued functionals in compact Banach spaces (see, e.g., [33,65]).…”
Section: Discussion (mentioning)
confidence: 84%
“…where G is a nonlinear operator, B is a linear boundary operator, Ω is a bounded subset of R^d which can be represented as a Cartesian product of d one-dimensional domains, while u_0(x) and h(x, t) are, respectively, the initial condition and the boundary condition. To compute the solution of (93) we substitute any of the tensor series expansions we discussed in Section 3, e.g., (55), into (93) and derive a coupled system of nonlinear evolution equations for the one-dimensional modes ψ^(j)…”
Section: Dynamically Orthogonal Tensor Methods for High-Dimensional N… (mentioning)
confidence: 99%
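Equation (93) itself is not reproduced in the excerpt; from the quantities it names (nonlinear operator G, linear boundary operator B, initial condition u_0, boundary data h), its generic form is presumably an initial-boundary value problem of the type below. This is an assumption; the numbers (93) and (55) refer to the citing paper.

```latex
\begin{aligned}
&\frac{\partial u(\mathbf{x},t)}{\partial t} = G\big(u(\mathbf{x},t)\big),
  && \mathbf{x}\in\Omega\subset\mathbb{R}^{d},\\
&B\,u(\mathbf{x},t) = h(\mathbf{x},t),
  && \mathbf{x}\in\partial\Omega,\\
&u(\mathbf{x},0) = u_{0}(\mathbf{x}),
  && \Omega = \Omega_{1}\times\cdots\times\Omega_{d},
\end{aligned}
```

with each Ω_k a one-dimensional domain; substituting a tensor series expansion for u then yields coupled evolution equations for the one-dimensional modes.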
“…Note that L_u and R_u are self-adjoint relative to (9) and (10), respectively. Moreover, if U_u is compact (e.g., if we consider a decomposition in H_0 = L^2), then U_u^† is compact, and therefore L_u and R_u are compact.…”
Section: Recursive Bi-Orthogonal Decomposition of Time-Independent Mu… (mentioning)
confidence: 99%