1992
DOI: 10.1137/0613066
Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start

Abstract: Our problem is to compute an approximation to the largest eigenvalue of an n × n large symmetric positive definite matrix with relative error at most ε. We consider only algorithms that use Krylov information [b, Ab, …, A^k b] consisting of k matrix-vector multiplications for some unit vector b. If the vector b is chosen deterministically then the problem cannot be solved no matter how many matrix-vector multiplications are performed and what algorithm is used. If, however, the vector b is chosen randomly…
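The setting described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's algorithm or analysis: it builds the Krylov information with a random unit start vector and returns the power method's Rayleigh-quotient estimate of the largest eigenvalue. The function name and the test matrix are made up for this example.

```python
# Sketch of the power estimate built from Krylov information
# [b, Ab, ..., A^k b] with a random unit start vector b.
import numpy as np

def power_estimate(A, k, seed=None):
    """Rayleigh-quotient estimate of the largest eigenvalue of a symmetric
    positive definite matrix A after k matrix-vector multiplications."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(A.shape[0])
    b /= np.linalg.norm(b)          # random unit start vector
    for _ in range(k):
        b = A @ b
        b /= np.linalg.norm(b)      # renormalize to avoid overflow
    return b @ A @ b                # Rayleigh quotient, <= lambda_max

# SPD test matrix with known largest eigenvalue 10
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(1.0, 10.0, 50)) @ Q.T
est = power_estimate(A, k=200, seed=1)
```

Because b is random, the estimate succeeds with high probability over the choice of start vector, which is exactly the distinction the abstract draws against deterministic starts.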

Cited by 230 publications (271 citation statements)
References 12 publications (6 reference statements)
“…A thin eigenvalue decomposition of a large matrix is usually computed with an iterative method such as the Lanczos algorithm [24]. Its computational complexity depends on the number of iterations needed by the iterative procedure, which in turn depends on the gap between the eigenvalues of the matrix [25]. Though no computational complexity can be given for spectral clustering in general, it takes at least as long as the k-means method discussed in the "K-means clustering" section, since k-means is the last step in the algorithm, but should be much higher in practice due to the eigenvalue decomposition.…”
Section: Spectral Clustering
confidence: 99%
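The excerpt's point, that the iteration count of Lanczos is governed by the gaps between eigenvalues, can be illustrated with a minimal Lanczos sketch. This is hypothetical code, not taken from the cited works; full reorthogonalization is used for numerical simplicity, and the test matrix with a well-separated top eigenvalue is an assumption chosen so that few iterations suffice.

```python
import numpy as np

def lanczos_largest(A, k, seed=None):
    """Largest Ritz value after k Lanczos steps from a random unit start.
    Uses full reorthogonalization for numerical robustness."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = [q]
    alphas, betas = [], []
    for _ in range(k):
        v = A @ Q[-1]
        alphas.append(Q[-1] @ v)        # diagonal entry q^T A q
        for u in Q:                     # orthogonalize against all previous vectors
            v -= (u @ v) * u
        beta = np.linalg.norm(v)
        if beta < 1e-12:                # Krylov space exhausted
            break
        betas.append(beta)
        Q.append(v / beta)
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    return np.linalg.eigvalsh(T)[-1]    # largest eigenvalue of tridiagonal T

# Large gap between the top eigenvalue (10) and the rest (in [1, 2]):
# a handful of iterations already gives an accurate estimate.
rng = np.random.default_rng(0)
Q0, _ = np.linalg.qr(rng.standard_normal((100, 100)))
A = Q0 @ np.diag(np.r_[np.linspace(1.0, 2.0, 99), 10.0]) @ Q0.T
est = lanczos_largest(A, k=25, seed=1)
```

Shrinking the gap (e.g. replacing 10.0 with 2.01) would require many more iterations for the same accuracy, which is the dependence the excerpt refers to.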
“…The low rank factorization of X_i is obtained in the algorithm via […]. Furthermore, it is known that the function ApproxEV(M, ε), which computes an approximate eigenvector with guarantee ε, can be implemented using the Lanczos method, which runs in time Õ(N√L/√ε) and returns a valid approximation with high probability, where N is the number of non-zero entries in the matrix M ∈ R^{n×n} and L is a bound on the largest eigenvalue of M; see [11]. Hence, we can conclude with the following corollary.…”
Section: Algorithm 1 ApproxSDP
confidence: 99%
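A rough way to read the Õ(N√L/√ε) bound in the excerpt: each Lanczos iteration costs one pass over the N nonzeros of M, and the number of iterations grows like 1/√ε, versus the 1/ε dependence of plain power iteration. The toy comparison below contrasts the two iteration counts; the log-n factor and unit constants are illustrative assumptions, not taken from [11].

```python
# Toy comparison of the 1/eps (power) versus 1/sqrt(eps) (Lanczos)
# dependence of the iteration count on the accuracy parameter eps.
# Constants and the log-n factor are illustrative assumptions only.
import math

n = 1000
for eps in (1e-1, 1e-2, 1e-4):
    k_power = math.ceil(math.log(n) / eps)
    k_lanczos = math.ceil(math.log(n) / math.sqrt(eps))
    print(f"eps={eps:g}: power ~{k_power} iters, Lanczos ~{k_lanczos} iters")
```

The gap between the two counts widens as ε shrinks, which is why the √ε in the denominator matters for high-accuracy solves.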
“…In the Lanczos method, each iteration can be implemented in one pass over A, whereas our algorithm requires two passes over A in each iteration. Kuczyński and Woźniakowski [27] prove that the Lanczos method, with a randomly chosen starting vector, outputs a vector…”
Section: Related Work
confidence: 99%