2022
DOI: 10.48550/arxiv.2201.07752
Preprint

A lower scaling four-component relativistic coupled cluster method based on natural spinors

Somesh Chamoli,
Kshitijkumar Surjuse,
Malaya K. Nayak
et al.

Abstract: We present the theory, implementation, and benchmark results for a frozen natural spinors-based lower scaling four-component relativistic coupled cluster method. The natural spinors are obtained by diagonalizing the one-body reduced density matrix from a relativistic MP2 calculation based on the four-component Dirac-Coulomb Hamiltonian. The correlation energy in the coupled cluster method converges more rapidly with respect to the size of the virtual space in the frozen natural spinor basis than that observed in t…
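The abstract describes the central step of the method: form the one-body reduced density matrix from a relativistic MP2 calculation, diagonalize its virtual-virtual block to obtain natural spinors and their occupation numbers, and freeze the natural spinors whose occupations fall below a threshold before the coupled cluster step. The sketch below illustrates that generic frozen-natural-spinor construction in NumPy terms; the tensor layout, the 1/2 prefactor convention, and the `occ_threshold` parameter are assumptions made for illustration and are not taken from the paper's BAGH implementation.

```python
import numpy as np

def frozen_natural_spinors(eri_ovov, eps_occ, eps_vir, occ_threshold=1e-5):
    """Sketch of a frozen-natural-spinor construction from an MP2 density.

    Hypothetical inputs (not the paper's data structures):
      eri_ovov : complex array, shape (no, nv, no, nv), antisymmetrized
                 integrals <ij||ab> over canonical occupied/virtual spinors
      eps_occ, eps_vir : occupied / virtual spinor energies
    """
    # MP2 amplitudes t_ij^ab = <ij||ab> / (e_i + e_j - e_a - e_b)
    denom = (eps_occ[:, None, None, None] + eps_occ[None, None, :, None]
             - eps_vir[None, :, None, None] - eps_vir[None, None, None, :])
    t2 = eri_ovov / denom

    # Virtual-virtual block of the MP2 one-body reduced density matrix,
    # D_ab = 1/2 * sum_{ijc} t_ij^ac (t_ij^bc)^*   (prefactor convention assumed)
    d_vv = 0.5 * np.einsum('iajc,ibjc->ab', t2, np.conj(t2))

    # Natural spinors: eigenvectors of D; eigenvalues are occupation numbers.
    occ_num, rot = np.linalg.eigh(d_vv)
    order = np.argsort(occ_num)[::-1]        # largest occupation first
    occ_num, rot = occ_num[order], rot[:, order]

    # Freeze (drop) virtual natural spinors with negligible occupation;
    # the retained ones span the truncated virtual space for coupled cluster.
    keep = occ_num > occ_threshold
    return rot[:, keep], occ_num
```

In this kind of scheme, the occupation threshold controls how aggressively the virtual space is truncated, which is what drives the faster convergence of the correlation energy that the abstract reports.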

Cited by 1 publication (1 citation statement)
References: 36 publications
“…We note that upon completing this manuscript, we have become aware of another implementation of the MP2FNOs approach for relativistic correlated methods, in the BAGH code [49]. While the main features of the MP2FNOs method are the same in both implementations, we first note that our implementation fully exploits ExaTENSOR's single-node or distributed memory (multi-node) and GPU acceleration capabilities, and as such can be efficiently employed in systems ranging from local clusters to latest-generation supercomputers.…”
Section: Introduction (citation type: mentioning, confidence: 99%)