2021
DOI: 10.1038/s41467-021-27504-0
SpookyNet: Learning force fields with electronic degrees of freedom and nonlocal effects

Abstract: Machine-learned force fields combine the accuracy of ab initio methods with the efficiency of conventional force fields. However, current machine-learned force fields typically ignore electronic degrees of freedom, such as the total charge or spin state, and assume chemical locality, which is problematic when molecules have inconsistent electronic states, or when nonlocal effects play a significant role. This work introduces SpookyNet, a deep neural network for constructing machine-learned force fields with ex…

Cited by 148 publications
(167 citation statements)
References 91 publications
“…Our proposed Equivariant Graph Attention Networks (EQ-GAT) operates in 3D space and implements the message passing for each target node i on its local neighbourhood N (i) as defined in Section 2 to avoid the quadratic complexity of the vanilla self-attention when one target node would interact with all other nodes in the point cloud. We emphasize that the integration of local neighbourhoods manifests as a powerful inductive bias and in a bio-chemistry context, coincides with the assumption that a large part of energy variations can be attributed to local interactions, although the influence and importance of non-local effects in machine-learned force-fields has been recently analyzed in (Unke et al, 2021).…”
Section: Equivariant Graph Attention Network (supporting)
confidence: 63%
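The excerpt above describes restricting message passing to a local neighbourhood N(i) so that each node interacts only with nearby nodes, avoiding the quadratic cost of full self-attention. A minimal sketch of that idea follows; the cutoff radius, inverse-distance weights, and aggregation rule are illustrative assumptions, not the actual EQ-GAT update:

```python
import numpy as np

def local_message_passing(positions, features, cutoff=5.0):
    """Toy message passing over local neighbourhoods N(i).

    Each node aggregates features only from neighbours within `cutoff`,
    weighted by inverse distance. This is a sketch of the locality
    inductive bias, not the EQ-GAT attention mechanism.
    """
    n = len(positions)
    # Pairwise distances between all nodes in the point cloud
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    out = np.zeros_like(features)
    for i in range(n):
        # Local neighbourhood N(i): nodes within the cutoff, excluding i itself
        nbrs = [j for j in range(n) if j != i and dist[i, j] < cutoff]
        if not nbrs:
            out[i] = features[i]  # isolated node: no messages received
            continue
        w = 1.0 / dist[i, nbrs]   # simple distance-based weights (assumption)
        w /= w.sum()
        out[i] = features[i] + w @ features[nbrs]
    return out
```

Because each node only touches its |N(i)| neighbours, the cost scales with the number of local pairs rather than with n² as in vanilla self-attention over the whole point cloud.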
“…An example is the inclusion of long-range electrostatic interactions in third- and fourth-generation NNPs. While most fourth-generation NNPs employ predefined descriptors, also novel types of message passing methods are just emerging aiming to describe non-local effects 97,98,126 . Apart from electrostatics, also dispersion interactions, which are weak but can be important in large systems, have been included beyond the local atomic environments in NNPs 25,106 .…”
Section: Discussion and Outlook (mentioning)
confidence: 99%
“…While explanation methods are often evaluated based on their technical merit (e.g. accuracy, runtime) [63], an increasingly relevant question is whether these explanations enable the human to truly understand the model at hand (also known as causability [35]) and whether these explanations can be turned into meaningful insights and decisions [25], [58], [12], [82]. This contribution will focus on single-instance (local), attribution-based explanations that assign a share of the model output f (x) to the individual features of the respective input sample x.…”
Section: A Brief Review of XAI (mentioning)
confidence: 99%
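The excerpt above describes single-instance, attribution-based explanations that assign a share of the model output f(x) to each feature of the input sample x. The following sketch illustrates that idea for a deliberately simple linear model (the weights and input are hypothetical, and this is not one of the XAI methods the cited work evaluates):

```python
import numpy as np

def attribute_linear(w, x):
    """Single-instance (local) attribution for a linear model f(x) = w·x.

    Feature i receives the share w[i] * x[i] of the output. For a linear
    model these shares sum exactly to f(x) (the "completeness" property
    that many attribution methods aim to satisfy).
    """
    return w * x

# Usage: attribute one prediction to its input features
w = np.array([0.5, -2.0, 1.0])  # model weights (hypothetical)
x = np.array([2.0, 1.0, 3.0])   # input sample
shares = attribute_linear(w, x)
f_x = float(w @ x)              # model output
```

For nonlinear models the attribution rule is more involved, but the goal is the same: shares per input feature that explain one specific prediction.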
“…This allows for increased confidence in the model results through sanity-checks by an expert as well as new insights into physical phenomena previously not understood (e.g. [69], [38], [82]). XAI methods, especially XAIR methods, can be a key for both which we will demonstrate in the following sections by applying our proposed retraining approach in the quantum chemistry domain.…”
Section: B Explanations in Quantum Chemistry (mentioning)
confidence: 99%