1975
DOI: 10.1016/0021-9991(75)90065-0
The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices

Cited by 2,298 publications (814 citation statements); references 9 publications.
“…The working equations may then be solved using non-Hermitian variants of the Davidson algorithm. [23][24][25][26] When excitation energies are the subject of interest, the EOM-CC equation is equivalent to the Jacobian in the LR-CC formalism 12 for excitation energy calculations. The mathematical difference between LR and EOM formalisms arises when they are used to compute excitation properties, such as transition dipole and oscillator strengths.…”
Section: Theory
confidence: 99%
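The excerpt above cites Davidson-type solvers for the eigenvalue equations. As a point of reference for the original (Hermitian) method of the cited 1975 paper, a minimal dense sketch, assuming NumPy and a diagonal (Jacobi-like) preconditioner, might look like:

```python
import numpy as np

def davidson_lowest(A, k=1, tol=1e-8, max_iter=100):
    """Davidson iteration for the lowest eigenpair(s) of a real-symmetric A.
    Illustrative dense sketch; production codes apply A as a linear operator."""
    n = A.shape[0]
    V = np.eye(n, k)                  # unit-vector starting guesses
    diag = np.diag(A)
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)        # orthonormalise the search subspace
        AV = A @ V
        H = V.T @ AV                  # subspace (Rayleigh) matrix
        theta, s = np.linalg.eigh(H)
        x = V @ s[:, :k]              # Ritz vectors
        r = AV @ s[:, :k] - x * theta[:k]   # residuals A x - theta x
        if np.linalg.norm(r) < tol:
            break
        for j in range(k):            # diagonal preconditioner for corrections
            denom = diag - theta[j]
            denom[np.abs(denom) < 1e-12] = 1e-12
            V = np.column_stack([V, r[:, j] / denom])
    return theta[:k], x
```

The non-Hermitian EOM-CC variants referenced in the excerpt replace `eigh` with a non-symmetric subspace diagonalization and track left and right eigenvectors, but the expand-project-correct structure is the same.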
“…Since solving for a subset of eigenvalues of large matrices is a standard problem in computational science, a variety of different algorithms exist that vary in convergence properties and memory requirements [51]. Among the most popular in electronic structure theory are the Davidson [52] and Conjugate Gradients algorithms [53], which are based on minimising the Rayleigh quotient…”
Section: Iterative Eigensolvers
confidence: 99%
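The excerpt above notes that conjugate-gradient-type eigensolvers minimise the Rayleigh quotient rho(x) = (x^T A x)/(x^T x). A minimal sketch of that idea, assuming NumPy and using an exact Rayleigh-Ritz step in the two-dimensional space spanned by the iterate and its gradient (a simplification of full CG/LOBPCG-style schemes), might look like:

```python
import numpy as np

def rayleigh_min(A, tol=1e-8, max_iter=500, seed=1):
    """Find the lowest eigenpair of real-symmetric A by minimising the
    Rayleigh quotient; each step solves a 2x2 Ritz problem exactly."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        Ax = A @ x
        rho = x @ Ax
        g = Ax - rho * x              # (half the) gradient on the unit sphere
        if np.linalg.norm(g) < tol:
            break
        V, _ = np.linalg.qr(np.column_stack([x, g]))  # orthonormal 2D basis
        w, s = np.linalg.eigh(V.T @ A @ V)            # 2x2 Ritz problem
        x = V @ s[:, 0]                               # take the lower Ritz pair
    return x @ (A @ x), x
```

Because rho(x) is stationary exactly at eigenvectors, and its minimum over all x is the lowest eigenvalue, this descent converges to the smallest eigenpair without ever factorising A.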
“…For each iteration the step to the minimum at the boundary of the trust-region ensures that we are moving downhill. When solving the level-shifted Newton equations, explicit computation and storage of the Hessian is avoided by solving the equations using iterative algorithms 107 where only linear transformations of the Hessian on trial vectors are required. The trust-region method was introduced by Fletcher 106 and was adapted by Høyvik and Jørgensen to include a line search step to make it an efficient and reliable algorithm for optimizing localization functions for both the occupied and the virtual orbitals, for large molecular systems containing diffuse basis functions, 108 and also when large negative Hessian eigenvalues are encountered in the initial iterations.…”
Section: Localization Function Optimization
confidence: 99%
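The excerpt above describes solving the level-shifted Newton equations while touching the Hessian only through its action on trial vectors. A minimal sketch of that matrix-free pattern, assuming NumPy and a conjugate-gradient inner solver for (H - mu I) p = -g (the function `apply_H` standing in for whatever Hessian-vector transformation the application provides), might look like:

```python
import numpy as np

def cg_solve(apply_H, g, mu, tol=1e-10, max_iter=200):
    """Solve the level-shifted Newton equations (H - mu*I) p = -g by
    conjugate gradient; H enters only via products apply_H(v)."""
    p = np.zeros_like(g)
    r = -g.copy()                 # residual of (H - mu I) p = -g at p = 0
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hd = apply_H(d) - mu * d  # one Hessian-vector product per iteration
        alpha = rs / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p
```

The level shift mu is chosen (e.g. by the trust-region machinery in the excerpt) so that H - mu I is positive definite, which is exactly the condition under which plain CG is applicable; the full Hessian is never formed or stored.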