2014
DOI: 10.1137/130945995

An Accelerated Divide-and-Conquer Algorithm for the Bidiagonal SVD Problem

Abstract: In this paper, aiming at solving the bidiagonal SVD problem, a classical divide-and-conquer (DC) algorithm is modified; it computes the SVD of broken arrow matrices by solving secular equations. The main cost of DC lies in updating the singular vectors, which involves two matrix-matrix multiplications. We find that the singular vector matrices of a broken arrow matrix are Cauchy-like matrices with an off-diagonal low-rank property, so they can be approximated efficiently by hierarchically semiseparable (HSS) matrices…
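The abstract's key observation — that the singular vector matrices of a broken arrow matrix have numerically low-rank off-diagonal blocks — can be checked directly. The sketch below is our own illustration, not the paper's code: the matrix size, random data, and truncation tolerance are arbitrary choices. It builds a broken arrow matrix (dense first row, diagonal below), computes its SVD with NumPy's dense LAPACK wrapper, and measures the numerical rank of one off-diagonal block of the left singular vector matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
d = np.sort(rng.uniform(1.0, 2.0, n - 1))  # distinct diagonal entries
z = rng.standard_normal(n)                 # dense coupling row

# "Broken arrow" matrix: dense first row, diagonal below it
M = np.zeros((n, n))
M[0, :] = z
M[1:, 1:] = np.diag(d)

U, s, Vt = np.linalg.svd(M)
Ua = U[:, np.argsort(s)]  # reorder columns so the sigmas ascend like d

# Off-diagonal block of the left singular vector matrix: its rows and
# columns correspond to well-separated Cauchy nodes (d_j^2 vs. sigma_i^2),
# so it should be numerically low-rank.
B = Ua[: n // 2, n // 2:]
sv = np.linalg.svd(B, compute_uv=False)
rank = int(np.sum(sv > 1e-10 * sv[0]))
print(f"numerical rank of {B.shape} off-diagonal block: {rank}")
```

The measured rank comes out far below the block dimension, which is what makes an HSS approximation of the whole singular vector matrix — and hence a fast structured matrix-matrix multiply — worthwhile.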

Cited by 19 publications (26 citation statements). References 42 publications (56 reference statements).
“…Our main improvement was to express more parallelism during the merge step, where the quadratic operations become the bottleneck once the cubic operations are well parallelized. A recent study by Li [24] showed that the matrix products could be improved with the use of hierarchical matrices, reducing the cubic part at the same accuracy. Combining both solutions should provide a fast and accurate solution while reducing the memory required.…”
Section: Discussion
confidence: 99%
“…We expect HSS algorithms to perform well on large problems. Therefore, we only use HSS algorithms when the problem size is large enough, just as in [26,27].…”
Section: STRUMPACK
confidence: 99%
“…Recently, the authors [27] used hierarchically semiseparable (HSS) matrices [8] to accelerate the tridiagonal DC algorithm in LAPACK, obtaining about 6x speedups over the LAPACK routine for some large matrices on a shared-memory multicore platform. The bidiagonal and banded DC algorithms for the SVD problem are accelerated similarly [26,28]. The main point is that some intermediate eigenvector matrices are rank-structured matrices [8,20].…”
Section: Introduction
confidence: 99%
“…It is easy to check that Q is also off-diagonally low-rank. To take advantage of these two properties, we can use an HSS matrix to approximate Q and then use the fast HSS matrix multiplication algorithm to update the eigenvectors, as in the bidiagonal SVD case. A structured low-rank approximation method for Cauchy-like matrices, called structured rank-revealing Schur-complement factorization (SRRSC), can be used to construct HSS matrices efficiently.…”
Section: Introduction
confidence: 99%
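The fast-multiplication idea running through these citation statements can be illustrated with a deliberately simplified sketch. This is a one-level block low-rank scheme of our own devising — not the multi-level HSS representation or the SRRSC construction of the cited works — and the Cauchy-like test matrix and tolerance are arbitrary: the diagonal blocks stay dense, each off-diagonal block is replaced by a truncated-SVD factorization, and a matrix-vector product is then applied through the compressed form.

```python
import numpy as np

def compress_one_level(A, tol=1e-10):
    """Keep diagonal blocks dense; replace each off-diagonal block with a
    truncated-SVD low-rank factorization (a one-level stand-in for HSS)."""
    m = A.shape[0] // 2
    blocks = {"11": A[:m, :m], "22": A[m:, m:]}
    for key, block in {"12": A[:m, m:], "21": A[m:, :m]}.items():
        U, s, Vt = np.linalg.svd(block, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))
        blocks[key] = (U[:, :r] * s[:r], Vt[:r])  # rank-r factors
    return blocks, m

def fast_matvec(blocks, m, x):
    """Multiply through the compressed form: dense diagonal blocks plus
    rank-r off-diagonal contributions applied factor by factor."""
    U12, V12 = blocks["12"]
    U21, V21 = blocks["21"]
    y = np.empty(len(x))
    y[:m] = blocks["11"] @ x[:m] + U12 @ (V12 @ x[m:])
    y[m:] = blocks["22"] @ x[m:] + U21 @ (V21 @ x[:m])
    return y

# Cauchy-like test matrix: off-diagonal blocks have separated node sets,
# so their singular values decay rapidly.
n = 200
i = np.arange(n)
C = 1.0 / (i[:, None] - i[None, :] - 0.5)
blocks, m = compress_one_level(C)
x = np.random.default_rng(1).standard_normal(n)
err = np.linalg.norm(fast_matvec(blocks, m, x) - C @ x) / np.linalg.norm(C @ x)
```

The compressed multiply agrees with the dense product to roughly the truncation tolerance, while each off-diagonal block is stored and applied at a small fraction of its full rank; the real HSS algorithms in the cited works apply this recursively over many levels.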