2016
DOI: 10.1002/nla.2046
New fast divide‐and‐conquer algorithms for the symmetric tridiagonal eigenvalue problem

Abstract: In this paper, two accelerated divide-and-conquer algorithms are proposed for the symmetric tridiagonal eigenvalue problem, which cost O(N²r) flops in the worst case, where N is the dimension of the matrix and r is a modest number depending on the distribution of eigenvalues. Both of these algorithms use hierarchically semiseparable (HSS) matrices to approximate some intermediate eigenvector matrices, which are Cauchy-like matrices and are off-diagonally low-rank. The difference of these two versions li…
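The structure the abstract exploits arises in the rank-one merge step of divide-and-conquer: the eigenvector matrix of a diagonal-plus-rank-one matrix D + rho*z*z^T has entries proportional to z_j / (d_j - lam_i), i.e., it is Cauchy-like. A minimal numpy sketch of ours (not the paper's code; it assumes no deflation, so all d_j are distinct and all z_j nonzero) verifies this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
d = np.sort(rng.standard_normal(n))   # distinct diagonal entries (no deflation)
z = rng.standard_normal(n)
rho = 1.0

A = np.diag(d) + rho * np.outer(z, z)   # rank-one merge matrix D + rho*z*z^T
lam, V = np.linalg.eigh(A)              # reference eigendecomposition

# Cauchy-like eigenvector formula: column i is (D - lam_i*I)^{-1} z, normalized,
# so its j-th entry is proportional to z_j / (d_j - lam_i).
Q = z[:, None] / (d[:, None] - lam[None, :])
Q /= np.linalg.norm(Q, axis=0)

# Columns agree with eigh column by column, up to sign.
for i in range(n):
    assert min(np.linalg.norm(Q[:, i] - V[:, i]),
               np.linalg.norm(Q[:, i] + V[:, i])) < 1e-6
print("Cauchy-like reconstruction matches eigh")
```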

Cited by 7 publications (15 citation statements)
References 44 publications (78 reference statements)
“…The main computational task of DC lies in computing the eigenvectors via matrix-matrix multiplications (MMM) (5), which costs O(N³) flops. Since Q is a Cauchy-like matrix and off-diagonally low-rank, MMM (5) can be accelerated by using HSS matrix algorithms, and the computational complexity can be reduced significantly; see [15,12] for more details. The aim of this work is to reduce not only the computation cost of MMM (5) but also its communication cost in a distributed-memory environment.…”
Section: Preliminaries (mentioning)
confidence: 99%
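To see the off-diagonal low rank that the citing authors (and [15,12]) rely on, the following small experiment of ours builds the unnormalized Cauchy-like eigenvector matrix for a random rank-one merge and measures the numerical rank of one off-diagonal block:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
d = np.sort(rng.standard_normal(n))
z = rng.standard_normal(n)

lam = np.linalg.eigvalsh(np.diag(d) + np.outer(z, z))

# Unnormalized Cauchy-like eigenvector matrix: Q[j, i] = z_j / (d_j - lam_i).
Q = z[:, None] / (d[:, None] - lam[None, :])

# Numerical rank of the lower-left off-diagonal block at tolerance 1e-10.
B = Q[n // 2:, : n // 2]
s = np.linalg.svd(B, compute_uv=False)
print("block shape:", B.shape, "numerical rank:", int(np.sum(s > 1e-10 * s[0])))
```

The rank stays well below the block dimension of 256, which is exactly why replacing the dense O(N³) MMM with an HSS-compressed one pays off.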
“…When there are few deflations, the size of the matrix Q in (19) will be large, and most of the time spent by DC corresponds to the matrix-matrix multiplication in (19). Furthermore, it is well known that the matrix Q defined as in (4) is a Cauchy-like matrix with the off-diagonally low-rank property; see [2,12]. Therefore, we simply use the parallel structured matrix-matrix multiplication algorithm to compute the eigenvector matrix U in (19).…”
Section: Parallel Structured DC Algorithm (mentioning)
confidence: 99%
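The essence of the structured multiply can be sketched at one level of the hierarchy (our simplification; the actual parallel HSS algorithm recurses on the diagonal blocks and distributes the work): compress the off-diagonal blocks of Q by truncated SVD, then multiply blockwise.

```python
import numpy as np

def compress(B, tol=1e-10):
    """Truncated SVD: return factors U, V with B ~= U @ V."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :r] * s[:r], Vt[:r]

def structured_mmm(Q, G, tol=1e-10):
    """One-level structured Q @ G: dense diagonal blocks, low-rank
    off-diagonal blocks. A full HSS scheme recurses on the diagonal blocks."""
    m = Q.shape[0] // 2
    U12, V12 = compress(Q[:m, m:], tol)
    U21, V21 = compress(Q[m:, :m], tol)
    top = Q[:m, :m] @ G[:m] + U12 @ (V12 @ G[m:])
    bot = U21 @ (V21 @ G[:m]) + Q[m:, m:] @ G[m:]
    return np.vstack([top, bot])

# Quick check against the dense product for a random Cauchy-like Q.
rng = np.random.default_rng(2)
n = 256
d = np.sort(rng.standard_normal(n))
z = rng.standard_normal(n)
lam = np.linalg.eigvalsh(np.diag(d) + np.outer(z, z))
Q = z[:, None] / (d[:, None] - lam[None, :])
G = rng.standard_normal((n, n))
err = np.linalg.norm(structured_mmm(Q, G) - Q @ G) / np.linalg.norm(Q @ G)
print("relative error:", err)
```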
“…Other rank-structured matrices include H-matrices [20,23], H²-matrices [22,21], quasiseparable matrices [14,40], and sequentially semiseparable (SSS) matrices [5,6]. We mostly follow the notation used in [35] and [41,27] to introduce HSS.…”
Section: HSS Matrices and STRUMPACK (mentioning)
confidence: 99%
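To make the family concrete, here is an O(n) matvec for its simplest member mentioned above, a generator-representable semiseparable-plus-diagonal matrix (our sketch; the generator names are made up). HSS generalizes the same telescoping-sum idea to a hierarchical block partition with nested off-diagonal bases.

```python
import numpy as np

def semiseparable_matvec(u, v, p, q, dg, x):
    """y = A @ x in O(n) for A = tril(u v^T, -1) + diag(dg) + triu(p q^T, +1)."""
    c = np.cumsum(v * x)                   # c_i  = sum_{j <= i} v_j x_j
    fwd = np.concatenate(([0.0], c[:-1]))  # sum over j <  i
    t = np.cumsum((q * x)[::-1])[::-1]     # t_i  = sum_{j >= i} q_j x_j
    rev = np.append(t[1:], 0.0)            # sum over j >  i
    return u * fwd + dg * x + p * rev

# Check against the explicitly formed dense matrix.
rng = np.random.default_rng(3)
n = 1000
u, v, p, q, dg, x = (rng.standard_normal(n) for _ in range(6))
A = np.tril(np.outer(u, v), -1) + np.diag(dg) + np.triu(np.outer(p, q), 1)
assert np.allclose(semiseparable_matvec(u, v, p, q, dg, x), A @ x)
print("O(n) semiseparable matvec matches dense")
```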
“…We expect HSS algorithms to perform well on large problems. Therefore, we only use HSS algorithms when the problem size is large enough, just as in [26,27].…”
Section: STRUMPACK (mentioning)
confidence: 99%
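A size-based dispatch of the kind described might look like the sketch below (our assumptions: the crossover value is illustrative and machine-dependent, and `structured_mmm` stands in for the HSS-accelerated kernel):

```python
HSS_CROSSOVER = 2048  # illustrative; tuned empirically per machine in practice

def multiply_eigvecs(Q, G, structured_mmm, crossover=HSS_CROSSOVER):
    """Use dense BLAS below the crossover size, where compression overhead
    dominates, and the structured (HSS-style) multiply above it."""
    if Q.shape[0] < crossover:
        return Q @ G
    return structured_mmm(Q, G)
```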