2022
DOI: 10.1007/s10543-021-00907-7

A rank-adaptive robust integrator for dynamical low-rank approximation

Abstract: A rank-adaptive integrator for the dynamical low-rank approximation of matrix and tensor differential equations is presented. The fixed-rank integrator recently proposed by two of the authors is extended to allow for an adaptive choice of the rank, using subspaces that are generated by the integrator itself. The integrator first updates the evolving bases and then does a Galerkin step in the subspace generated by both the new and old bases, which is followed by rank truncation to a given tolerance. It is shown…
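The procedure the abstract describes (basis update, Galerkin step in the span of new and old bases, truncation to a tolerance) can be sketched compactly. The following is a minimal Python/NumPy illustration, not the authors' reference implementation: each substep ODE is advanced by a single explicit Euler step for brevity, and all function and variable names are my own.

```python
# Minimal sketch of one step of a rank-adaptive BUG-type integrator for
# dY/dt = F(t, Y) with Y approximated as U @ S @ V.T.  Illustrative only:
# the substep ODEs are solved with one explicit Euler step each; any
# suitable ODE solver could be substituted.
import numpy as np

def rank_adaptive_step(F, t0, h, U0, S0, V0, tol):
    """Advance Y = U0 @ S0 @ V0.T from t0 to t0 + h.  Shapes: U0 (m, r),
    S0 (r, r), V0 (n, r); F(t, Y) returns an (m, n) array."""
    Y0 = U0 @ S0 @ V0.T
    # K-step (left basis):  K' = F(t, K V0^T) V0,  K(t0) = U0 S0.
    K1 = U0 @ S0 + h * F(t0, Y0) @ V0
    # L-step (right basis): L' = F(t, U0 L^T)^T U0,  L(t0) = V0 S0^T.
    L1 = V0 @ S0.T + h * F(t0, Y0).T @ U0
    # Augment the new bases with the old ones and orthonormalize (rank 2r).
    U_hat, _ = np.linalg.qr(np.hstack([K1, U0]))
    V_hat, _ = np.linalg.qr(np.hstack([L1, V0]))
    # Galerkin step in the augmented subspace:
    #   S' = U_hat^T F(t, U_hat S V_hat^T) V_hat,  S(t0) = U_hat^T Y0 V_hat.
    S_hat = U_hat.T @ Y0 @ V_hat
    S_hat = S_hat + h * U_hat.T @ F(t0, U_hat @ S_hat @ V_hat.T) @ V_hat
    # Truncate to the smallest rank whose discarded singular values have
    # Euclidean norm at most tol.
    P, sigma, QT = np.linalg.svd(S_hat)
    r1 = len(sigma)
    while r1 > 1 and np.linalg.norm(sigma[r1 - 1:]) <= tol:
        r1 -= 1
    return U_hat @ P[:, :r1], np.diag(sigma[:r1]), V_hat @ QT[:r1, :].T
```

The augmentation with the old bases is what makes the final truncation safe: the Galerkin subspace contains both the previous approximation and its update, so discarding small singular values cannot lose the part of the solution carried over from the last step.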

Cited by 38 publications (64 citation statements). References 59 publications (104 reference statements).
“…Instead of evolving individual low-rank factors in time, these methods evolve products of low-rank factors, which yields remarkable stability and exactness properties [27], both in the matrix and the tensor settings [31,40,39,9]. In this work, we employ the "unconventional" basis update & Galerkin step integrator [7] as well as its rank-adaptive extension [5], see also [32,8]. The rank-adaptive unconventional integrator chooses the approximation ranks according to the continuous-time training dynamics and allows us to find highly-performing low-rank subnetworks directly during the training phase, while requiring reduced training cost and memory storage.…”
Section: Related Work on Low-rank Methods
confidence: 99%
“…Its solution space contains all elements of the layer's weight matrix. Thus, rather than a discrete update of the weights, we interpret the training phase as a continuous evolution of the weights according to (5), as illustrated in Fig. 1(a-b).…”
Section: Low-rank Training via Gradient Flow
confidence: 99%
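For concreteness, the gradient-flow view in this statement can be combined with a rank-adaptive step like the one sketched above: training a layer with weight matrix W amounts to integrating dW/dt = -∇_W L(W) on a low-rank manifold. Everything below is a hypothetical toy setup; the quadratic loss, data, shapes, and names are illustrative assumptions, not taken from the cited paper.

```python
# Toy example: low-rank "training" via gradient flow, dW/dt = -grad L(W),
# time-stepped with the rank_adaptive_step sketched earlier.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((128, 64))        # inputs  (batch, n_in)
targets = rng.standard_normal((128, 32))  # targets (batch, n_out)

def grad_loss(W):
    # Gradient of 0.5 * ||X @ W.T - targets||_F^2 w.r.t. W (n_out, n_in).
    return (X @ W.T - targets).T @ X

F = lambda t, W: -grad_loss(W)            # gradient-flow right-hand side

r = 4                                     # initial rank
U, _ = np.linalg.qr(rng.standard_normal((32, r)))
V, _ = np.linalg.qr(rng.standard_normal((64, r)))
S = np.diag(rng.uniform(0.1, 1.0, size=r))

for _ in range(200):                      # training = time stepping
    U, S, V = rank_adaptive_step(F, 0.0, 1e-3, U, S, V, tol=1e-2)
print("final rank:", S.shape[0])          # rank chosen by the truncation tolerance
```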
“…Furthermore, the unconventional integrator inherits the exactness and robustness properties of the classical matrix projector-splitting integrator, see [40]. Additionally, it allows for an efficient use of rank adaptivity [44].…”
Section: Unconventional Integrator for Dynamical Low-rank Approximation
confidence: 99%
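The robustness referred to here is a bound on the integration error whose constants do not involve the (possibly tiny) singular values of the solution, unlike standard error analyses on the fixed-rank manifold. A hedged sketch of the fixed-rank statement, with assumptions abbreviated from memory rather than quoted from the cited papers:

```latex
% Sketch of the robust error bound.  The constants c_1, c_2 depend on
% Lipschitz and boundedness constants of F and on the time horizon
% T = n h, but NOT on the singular values of the exact or approximate
% solution.  Assumption: the non-tangential part of the vector field is
% small along the flow,
%   \|(I - P(Y))\, F(t, Y)\| \le \varepsilon .
\[
  \| Y_n - A(t_n) \| \le c_1 \varepsilon + c_2 h
  \qquad \text{for } t_n = n h \le T .
\]
% Exactness property: if A(t) has rank at most r for all t and the
% substep differential equations are solved exactly, the fixed-rank
% integrator reproduces A(t_n) exactly.
```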