2019
DOI: 10.1016/j.jcp.2019.07.003

Low-rank Riemannian eigensolver for high-dimensional Hamiltonians

Abstract: Problems such as computing the spectra of spin chains and the vibrational spectra of molecules can be written as high-dimensional eigenvalue problems, i.e., problems whose eigenvectors are naturally represented as multidimensional tensors. Tensor methods have proven to be an efficient tool for approximating solutions of such high-dimensional eigenvalue problems; however, their performance deteriorates quickly as the number of eigenstates to be computed increases. We address this issue by designing a new algorith…
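
To make the setting concrete, here is a minimal sketch of the general idea behind such solvers: Rayleigh-quotient descent in which the iterate is retracted to a low-rank set after every step. It uses the plain fixed-rank matrix case rather than the tensor-train format of the paper, and the function names (`truncate`, `lowrank_rayleigh_descent`), step-size choice, and test operator are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Hedged sketch only: Rayleigh-quotient descent with retraction to a fixed-rank
# set, in the simplest (matrix) case rather than the tensor-train case of the
# paper.  Names and step-size choices are illustrative.

def truncate(X, r):
    """Retract an n x n matrix to the set of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def lowrank_rayleigh_descent(H, n, r, steps=300, seed=0):
    """Approximate the smallest eigenpair of a symmetric H acting on vec(X)."""
    rng = np.random.default_rng(seed)
    X = truncate(rng.standard_normal((n, n)), r)
    X /= np.linalg.norm(X)
    lr = 1.0 / np.linalg.norm(H, 2)           # crude step size from the spectral norm
    for _ in range(steps):
        x = X.reshape(-1)
        Hx = H @ x
        rho = x @ Hx                          # Rayleigh quotient, since ||x|| = 1
        grad = 2.0 * (Hx - rho * x)           # gradient of x^T H x on the unit sphere
        X = truncate(X - lr * grad.reshape(n, n), r)
        X /= np.linalg.norm(X)                # keep the iterate normalized
    x = X.reshape(-1)
    return x @ (H @ x), X

if __name__ == "__main__":
    n, r = 12, 3
    A = np.random.default_rng(1).standard_normal((n * n, n * n))
    H = (A + A.T) / 2                         # random symmetric test operator
    val, _ = lowrank_rayleigh_descent(H, n, r)
    print("estimated lowest eigenvalue:", val, "exact:", np.linalg.eigvalsh(H)[0])
```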

Cited by 11 publications (9 citation statements). References 53 publications (101 reference statements).

Citation statements (ordered by relevance):
“…, R). An 'optimized' version of this operation combines the matrix-by-vector multiplication and the projection onto the tangent space into a single step P_X A Z and exploits the structure of arising operations to decrease complexity to O(d n^2 r_x r_z R^2) (for details of implementation of this operation see section 4.1 of [21]).…”
Section: Stop-gradient and A Wider Class Of Functionals (mentioning)
confidence: 99%
“…However, the representation of the model in tensor-train format and the fact that the given loss function (4) is quadratic with respect to the tensor α allow the use of more efficient optimization methods. In this work, we suggest using Riemannian optimization, which is a promising tool for learning tensor-based models [Rakhuba et al., 2019; Steinlechner, 2016].…”
Section: Learning Via Riemannian Optimization (mentioning)
confidence: 99%
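
For intuition about why a quadratic loss pairs well with Riemannian optimization, one hedged way to write the step (the notation below is introduced here and is not taken from the cited work): on a low-rank manifold $\mathcal{M}$, the Riemannian gradient is simply the tangent-space projection of the Euclidean gradient, which for a quadratic functional is an affine map of the iterate,

$$
f(\alpha) \;=\; \tfrac{1}{2}\,\langle \alpha,\, \mathcal{A}(\alpha)\rangle \;-\; \langle \alpha,\, b\rangle,
\qquad
\operatorname{grad} f(\alpha) \;=\; P_{T_\alpha \mathcal{M}}\bigl(\mathcal{A}(\alpha) - b\bigr),
$$

so each iteration requires only one operator application followed by a projection onto the tangent space, both of which can be carried out directly in the low-rank (e.g., tensor-train) representation.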
“…In particular, thanks to the multilinear structure of the TT format, it is feasible to minimize globally over a subspace in one of the TT cores. Proceeding in a sweeping manner, one can adapt the Jacobi-Davidson method to TT tensors; see [91,92].…”
Section: Eigenvalue Problems (mentioning)
confidence: 99%
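
A compact way to state the per-core step mentioned in this quote (the frame matrix $Q_{\neq k}$ is notation introduced here for illustration, not from the quote): with all TT cores except the $k$-th held fixed, the eigenvector is linear in the free core, $x = Q_{\neq k}\, c$, and minimizing the Rayleigh quotient over that core reduces to a small eigenvalue problem,

$$
\min_{c}\;\frac{c^{\top} Q_{\neq k}^{\top} H\, Q_{\neq k}\, c}{c^{\top} Q_{\neq k}^{\top} Q_{\neq k}\, c},
$$

i.e., the smallest eigenpair of the reduced matrix $Q_{\neq k}^{\top} H Q_{\neq k}$ (with an identity metric when the frame is kept orthonormal); the cores are then updated one after another in forward and backward sweeps.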