2022
DOI: 10.48550/arxiv.2204.05249
Preprint

Learning Local Equivariant Representations for Large-Scale Atomistic Dynamics

Abstract: A simultaneously accurate and computationally efficient parametrization of the energy and atomic forces of molecules and materials is a long-standing goal in the natural sciences. In pursuit of this goal, neural message passing has led to a paradigm shift by describing many-body correlations of atoms through iteratively passing messages along an atomistic graph. This propagation of information, however, makes parallel computation difficult and limits the length scales that can be studied. Strictly local descr…
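
The abstract's contrast between iterative message passing and strict locality can be made concrete. Below is a minimal sketch in plain NumPy, not the paper's Allegro architecture; every function name, the toy exp(-r) weighting, and the feature shapes are illustrative assumptions. It shows why a T-round message-passing model couples atoms up to T hops apart, while a strictly local descriptor depends only on one cutoff sphere and can be evaluated per atom in parallel.

```python
import numpy as np

def build_neighbor_lists(positions, cutoff):
    """For each atom, collect indices of atoms within `cutoff`."""
    n = len(positions)
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(positions[i] - positions[j]) < cutoff:
                neighbors[i].append(j)
    return neighbors

def message_passing_step(features, positions, neighbors):
    """One round of message passing: each atom aggregates its neighbors'
    current features. After T rounds, atom i depends on atoms up to T hops
    away, so the effective receptive field grows with depth -- the property
    the abstract identifies as an obstacle to parallel computation."""
    new_features = np.zeros_like(features)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            r_ij = np.linalg.norm(positions[j] - positions[i])
            new_features[i] += features[j] * np.exp(-r_ij)  # toy distance weight
    return new_features

def strictly_local_descriptor(positions, neighbors, i):
    """Strictly local alternative: atom i's representation depends only on
    atoms inside its own cutoff sphere, regardless of model depth, so each
    atom can be processed independently (and hence in parallel)."""
    return sum(np.exp(-np.linalg.norm(positions[j] - positions[i]))
               for j in neighbors[i])

pos = np.random.default_rng(0).normal(size=(8, 3))
nbrs = build_neighbor_lists(pos, cutoff=2.0)
h = np.ones((8, 4))                # initial per-atom features
for _ in range(3):                 # 3 rounds -> 3-hop receptive field
    h = message_passing_step(h, pos, nbrs)
```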

Cited by 15 publications (35 citation statements)
References 56 publications (110 reference statements)
“…5) was used for neural scaling experiments to improve convergence and avoid issues with systematic drift in predicted energies, which we identified during the course of this work and plan to address in future work. We use the SchNet [59], PaiNN [36], Allegro [10], and SpookyNet [37] models. Model implementations are from the NeuralForceField repository [34,60,61] and the Allegro repository [10].…”
Section: Methods (citation type: mentioning; confidence: 99%)

“…That is, not only do the equivariant GNNs achieve better performance for a given data budget, they achieve increasingly greater performance gains given more training data. This is due to the models' equivariance, which is known to produce greater sample efficiency [9,10], but it is interesting to note that this trend persists to much larger and more chemically diverse datasets than were previously considered, which typically include only 10^2–10^3 molecular geometries from a single molecular species.…”
Section: ChemGPT (citation type: mentioning; confidence: 98%)
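
The sample-efficiency claim leans on what equivariance means operationally: rotating the input geometry must rotate the predicted forces by the same rotation, F(Rx) = R F(x). The sketch below illustrates that check with a toy pairwise force model standing in for any of the cited networks; the function name and the exp(-d) potential are invented for illustration. A model with this symmetry built in never has to learn it from data, which is one intuition for the sample-efficiency gains the excerpt describes.

```python
import numpy as np

def toy_pairwise_forces(positions):
    """Trivially equivariant stand-in for a learned force model: forces
    from a toy pairwise potential acting along interatomic vectors."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[j] - positions[i]
            d = np.linalg.norm(r)
            forces[i] += (r / d) * np.exp(-d)  # toy attractive term
    return forces

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))          # 5 atoms in 3D

# Draw a random proper rotation matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                    # enforce det(Q) = +1

lhs = toy_pairwise_forces(x @ Q.T)   # predict on the rotated geometry
rhs = toy_pairwise_forces(x) @ Q.T   # rotate the original predictions
assert np.allclose(lhs, rhs)         # F(Qx) == Q F(x)
```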