2016
DOI: 10.1137/15m1032909
Preconditioned Low-rank Riemannian Optimization for Linear Systems with Tensor Product Structure

Abstract: The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When using standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making the use of classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that make it possible to mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator.…
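For intuition about that exponential growth, here is a minimal NumPy sketch (not from the paper; the grid size, dimension range, and function names are illustrative) that assembles the Kronecker-sum operator I ⊗ … ⊗ A_i ⊗ … ⊗ I typical of such discretizations and prints how the full system size n^d grows with the dimension d.

```python
import numpy as np


def laplace_1d(n):
    """Standard 1D finite-difference Laplacian on n interior grid points."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)


def kron_sum(matrices):
    """Assemble A = sum_i I x ... x A_i x ... x I (Kronecker sum).

    The dense matrix has size n^d x n^d, so forming it explicitly is only
    feasible for tiny n and d; this is exactly why structured, low-rank
    representations are needed in higher dimensions.
    """
    sizes = [m.shape[0] for m in matrices]
    total = int(np.prod(sizes))
    A = np.zeros((total, total))
    for i, Ai in enumerate(matrices):
        factors = [np.eye(n) for n in sizes]
        factors[i] = Ai
        term = factors[0]
        for f in factors[1:]:
            term = np.kron(term, f)
        A += term
    return A


if __name__ == "__main__":
    n = 4
    for d in range(1, 5):
        A = kron_sum([laplace_1d(n)] * d)
        print(f"d = {d}: system size {A.shape[0]} x {A.shape[0]}")
```

Even at n = 4 the dense operator already reaches 256 x 256 by d = 4; realistic grid sizes and dimensions rule out any explicit representation, which is what the low-rank tensor formats are designed to avoid.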

Cited by 35 publications (41 citation statements). References: 53 publications.
“…In conclusion, we wish to point out that the low‐rank approximability characterization may be of use outside the scope of projection methods. For instance, the Riemannian optimization methods are designed to compute the best rank k approximation (in the sense of, e.g., Kressner et al, and Vandereycken and Vandewalle) to the solution of the matrix equation. This approach is effective only if k is small, that is, the solution is approximable by a low‐rank matrix, for which we have provided sufficient conditions.…”
Section: Discussion
confidence: 99%
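To make the role of the rank-k constraint concrete, here is a deliberately simplified sketch (not the Riemannian algorithm of Kressner et al or Vandereycken and Vandewalle; the names, sizes, and step-size rule are assumptions) that alternates a Euclidean gradient step on the residual of a Lyapunov equation A X + X Aᵀ = C with a truncated-SVD retraction onto rank at most k. As the excerpt notes, this is only effective when the true solution is well approximated at small k.

```python
import numpy as np


def truncate_rank(X, k):
    """Project X onto the set of matrices of rank at most k via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]


def low_rank_lyapunov(A, C, k, steps=500, lr=None):
    """Rank-k projected gradient iteration for A X + X A^T = C.

    A crude stand-in for the Riemannian methods referenced above: a gradient
    step on f(X) = 0.5 * ||A X + X A^T - C||_F^2, followed by truncation back
    to rank <= k via the SVD.
    """
    n = A.shape[0]
    if lr is None:
        lr = 1.0 / (4.0 * np.linalg.norm(A, 2) ** 2)  # safe (conservative) step size
    X = np.zeros((n, n))
    for _ in range(steps):
        R = A @ X + X @ A.T - C      # residual of the matrix equation
        grad = A.T @ R + R @ A       # Euclidean gradient of f
        X = truncate_rank(X - lr * grad, k)
    return X


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 50, 5
    A = np.diag(np.linspace(1.0, 10.0, n))   # SPD and diagonal, purely for illustration
    B = rng.standard_normal((n, k))
    C = B @ B.T                              # low-rank right-hand side
    X = low_rank_lyapunov(A, C, k)
    print("residual norm:", np.linalg.norm(A @ X + X @ A.T - C))
```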
“…6 GMRES was chosen because biconjugate gradient stabilized method (Bi-CG-STAB) 7 loses orthogonality quickly, though both methods are known to require good preconditioners for fast convergence. For d = 3, we also have the more restricted methods by Chen et al 1 and Beik et al 8 See also the error analysis for the related Galerkin method by Beckermann et al, 9 the preliminary results in the report by Kressner et al, 10 the Alternating Least Squares (ALS) method by Beylkin et al, 11,12 and the optimization approach by Espig et al 13 We may also include the alternating direction iterative (ADI) method by Mach et al, 14,15 although the reports have stated that the approach is not competitive against the density matrix renormalization group (DMRG) solver for tensor-train matrices by Oseledets et al 16 (which has not been proven to be convergent).…”
Section: Existing Methods
confidence: 99%
“…For d = 1, Equation 1 degenerates to A_1 x = b, which can be solved by Gaussian elimination, (Bi-)CG, generalized minimal residual (GMRES), and many other methods efficiently. 3 (Chapters 3, 10) For d = 2, Equation 1 is the well-known Sylvester equation X A_1^T + A_2 X = b_2 b_1^T in the following form:…”
Section: Introduction
confidence: 99%
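As a concrete instance of the quoted d = 2 case, the sketch below (matrix sizes and data are made up for illustration) solves a tiny Sylvester equation X A_1^T + A_2 X = b_2 b_1^T with SciPy's dense solver; the rank-one right-hand side b_2 b_1^T is the kind of structure that typically makes the solution well approximated by a low-rank matrix in the large-scale setting.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(42)
n1, n2 = 6, 5

# Illustrative data: A_1, A_2 made diagonally dominant so the equation is well posed.
A1 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)
A2 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
b1 = rng.standard_normal(n1)
b2 = rng.standard_normal(n2)

# solve_sylvester(a, b, q) solves a @ X + X @ b = q, so the quoted equation
# X A_1^T + A_2 X = b_2 b_1^T maps to a = A_2, b = A_1^T, q = outer(b_2, b_1).
X = solve_sylvester(A2, A1.T, np.outer(b2, b1))

residual = A2 @ X + X @ A1.T - np.outer(b2, b1)
print("residual norm:", np.linalg.norm(residual))
```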
“…Indeed, the second term is equal to zero when ℳ is flat, that is, a linear subspace of the embedding Euclidean space (cf. Subsection 4.1 in the work of Kressner et al). Clearly, the main challenge in calculating the Riemannian Hessian is the derivative of the projection operator.…”
Section: The Geometry of ℳ_r and Riemannian Optimization
confidence: 99%
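The structure behind this remark can be written out explicitly. Below is the standard expression for the Riemannian Hessian of a function restricted to a submanifold ℳ embedded in a Euclidean space (the notation is generic, not copied from the cited works): it splits into the projected Euclidean Hessian and a curvature term involving the derivative of the tangent-space projector, and the second term vanishes when ℳ is flat because the projector is then independent of the base point.

```latex
% Riemannian Hessian of f on an embedded submanifold M, applied to a tangent vector xi.
% P_X is the orthogonal projector onto the tangent space T_X M, and \nabla f, \nabla^2 f
% are the Euclidean gradient and Hessian of a smooth extension of the objective, so that
% the Riemannian gradient is grad f(X) = P_X(\nabla f(X)).
\operatorname{Hess} f(X)[\xi]
  = \mathrm{P}_X\!\bigl(\nabla^2 f(X)[\xi]\bigr)
  + \mathrm{P}_X\!\bigl(\mathrm{D}\mathrm{P}_X[\xi]\,\nabla f(X)\bigr),
\qquad \xi \in T_X\mathcal{M}.
% If M is flat (a linear subspace of the embedding space), P_X does not depend on X,
% hence D P_X[\xi] = 0 and the second (curvature) term vanishes.
```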