2003
DOI: 10.1016/s0021-9991(03)00069-x
Massively parallel linear-scaling algorithm in an ab initio local-orbital total-energy method

Cited by 12 publications (12 citation statements)
References 30 publications
“…There are numerous implementations of these methods: the original papers [163,165,166,238]; the generalised versions [164,199,240,241]; in parallel [242,243]; with ultra-soft pseudopotentials [36]. A real-space implementation of similar ideas [202] uses exact inversion of the overlap, thus not forming a strict linear scaling method (though the cubic scaling part will have a small prefactor).…”
Section: Direct and Iterative Approaches
confidence: 99%
“…Scaling on up to 512 processors and 85,000 atoms was demonstrated, and an extensive analysis of scaling was made, noting that as the volume assigned to each processor decreases relative to a boundary area (due to localisation radii) the amount of communication will change from depending on the number of processors (as N_proc^{-1/3}) to depending on the volume of the boundary. The same approach has been used for an implementation of orbital minimisation [164] within an ab initio tight binding method [243], though MPI and OpenMP parallelisation are shared; the resulting code was demonstrated on up to 1,024 processors and 6,000 atoms.…”
Section: Parallelisation and Sparse Matrices
confidence: 99%
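The crossover described in the excerpt above can be illustrated with a back-of-envelope model: each processor owns a cubic sub-domain and must communicate data within a fixed localisation radius of its surface. The following sketch uses made-up cell volumes, radii and processor counts (not values from the paper) and is only a toy estimate, not the paper's scaling analysis.

```python
# Toy estimate of halo-exchange volume per processor in a 3-D domain
# decomposition with a fixed localisation radius.  All numbers are
# illustrative assumptions, not data from the cited work.

def halo_volume(total_volume, n_proc, r_loc):
    """Volume of the shell of width r_loc around one processor's cubic sub-domain."""
    edge = (total_volume / n_proc) ** (1.0 / 3.0)   # edge length of the sub-domain
    return (edge + 2.0 * r_loc) ** 3 - edge ** 3    # halo shell volume

if __name__ == "__main__":
    V = 1.0e6      # total simulation-cell volume (arbitrary units, assumed)
    R = 5.0        # localisation radius (assumed)
    for n in (8, 64, 512, 4096, 32768):
        print(f"n_proc={n:6d}  sub-volume={V / n:10.1f}  halo={halo_volume(V, n, R):10.1f}")
    # While each sub-domain is much larger than the localisation radius,
    # the halo (and hence the communication) keeps shrinking as processors
    # are added; once the sub-domain edge becomes comparable to 2*R the
    # halo volume saturates near the fixed boundary volume ~ (2*R)^3,
    # so adding processors no longer reduces the communication per processor.
```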
“…Because these integral tables depend only on the atom type, their r_c values, and the type of DFT exchange-correlation functional used, the integral tables only need to be generated once, for a given number of atomic species, rather than calculating integrals "on-the-fly" during an MD simulation. This pre-generation process lends itself to parallelization via spreading the integrals out over multiple processors based on integral types [23,30]. This parallelization is particularly important, since the number of integrals needed for each database grows as order N^3 with the number of atom types in the database.…”
Section: Localized Pseudo-atomic Orbitals and Basis Sets
confidence: 99%
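The pre-generation strategy in the excerpt above amounts to a task-parallel loop over species combinations. The sketch below shows one plausible way to spread such tables over ranks; the species names, the round-robin assignment, and the compute_table() placeholder are illustrative assumptions, not the FIREBALL implementation.

```python
# Minimal sketch of distributing integral-table pre-generation over ranks
# by integral type (species combination).  Assumed species, rank count and
# assignment scheme; the real code may distribute work differently.
from itertools import product

def my_tasks(species, rank, n_ranks):
    """Enumerate two- and three-centre species combinations and keep
    every n_ranks-th one for this rank (simple round-robin)."""
    two_centre = list(product(species, repeat=2))
    three_centre = list(product(species, repeat=3))   # grows as N^3 in the number of species
    all_tasks = two_centre + three_centre
    return [t for i, t in enumerate(all_tasks) if i % n_ranks == rank]

def compute_table(combo):
    # Placeholder for the numerical integration over the combination's
    # pseudo-atomic orbitals; here we only record which table was built.
    return f"table for {combo}"

if __name__ == "__main__":
    species = ["Si", "O", "H"]     # assumed example species
    n_ranks = 4                    # assumed number of ranks
    for rank in range(n_ranks):
        tables = [compute_table(c) for c in my_tasks(species, rank, n_ranks)]
        print(f"rank {rank}: builds {len(tables)} tables")
```

Because each table depends only on the species involved and not on the atomic positions, the tasks are independent and need no communication during generation, which is what makes this embarrassingly parallel distribution attractive.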
“…For large systems such as the 10-mer DNA (644 atoms), we have implemented a variational linear-scaling technique to solve for the total energies and forces from the sparse Hamiltonian and overlap matrices [31]. Furthermore, we have developed a massively-parallel algorithm using message passing interface (MPI) to manipulate extremely large sparse matrices required for linear-scaling algorithms and have exhibited simulations of up to 6000 atoms [30]. The use of local-orbitals in the FIREBALL method yields a very sparse Hamiltonian matrix, which facilitates using a linear-scaling algorithm to obtain the electronic band-structure energy.…”
Section: Amorphous Chalcogenides
confidence: 99%
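The excerpt above turns on the observation that, with localised orbitals, both the Hamiltonian and the (truncated) density matrix are sparse, so the band-structure energy E_bs = 2 Tr[ρH] costs work proportional to the number of non-zeros rather than O(N^3). The serial scipy sketch below uses random stand-in matrices purely to illustrate that point; it is not the paper's MPI sparse-matrix implementation.

```python
# Illustration of evaluating the band-structure energy E_bs = 2 Tr[rho H]
# from sparse matrices.  Random stand-ins for H and rho; a real calculation
# would use the Hamiltonian, overlap and density matrices of the method.
import scipy.sparse as sp

n_orb = 2000                                   # assumed number of local orbitals
H = sp.random(n_orb, n_orb, density=1e-3, format="csr", random_state=0)
H = 0.5 * (H + H.T)                            # symmetrise the stand-in Hamiltonian
rho = sp.random(n_orb, n_orb, density=1e-3, format="csr", random_state=1)
rho = 0.5 * (rho + rho.T)                      # stand-in truncated density matrix

# Tr[rho H] = sum_ij rho_ij * H_ji: only overlapping sparsity patterns
# contribute, so the cost scales with the number of non-zero elements.
E_bs = 2.0 * (rho.multiply(H.T)).sum()
print(f"non-zeros: H={H.nnz}, rho={rho.nnz}, band-structure energy = {E_bs:.6f}")
```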