2022
DOI: 10.1088/2632-2153/ac9955

How robust are modern graph neural network potentials in long and hot molecular dynamics simulations?

Abstract: Graph neural networks (GNNs) have emerged as a powerful machine learning approach for the prediction of molecular properties. In particular, recently proposed advanced GNN models promise quantum chemical accuracy at a fraction of the computational cost. While the capabilities of such advanced GNNs have been extensively demonstrated on benchmark datasets, there have been few applications in real atomistic simulations. Here, we therefore put the robustness of GNN interatomic potentials to the test, using the rec…

Cited by 47 publications (44 citation statements)
References 43 publications
“…The GNN* model with a radius of 0.6 nm and a scaling parameter of 0.1 yielded the most stable simulation results, as indicated by the smallest deviations between the different random seeds for the closed salt-bridge conformations (see Figures S1-S4 in the Supporting Information). The observation that models with similar performance on a retrospective test set exhibit more divergent behaviour in prospective simulations is in line with findings by Fu et al. [37] and Stocker et al. [38]. Based on these results, the following analyses were performed using only the GNN* with a radius of 0.6 nm and a scaling parameter of 0.1.…”
Section: Prospective Molecular Dynamics Simulations (supporting)
confidence: 84%
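The stability criterion described in the statement above, selecting the model variant with the smallest spread of an observable across independent random seeds, can be sketched as follows. This is a minimal illustration, not the authors' code: the model names and the per-seed observable values (e.g. a salt-bridge distance in nm) are hypothetical.

```python
import numpy as np

def seed_stability(trajectories_by_model):
    """Rank model variants by how consistent a scalar observable is
    across independent random seeds (smaller spread = more stable)."""
    scores = {}
    for model, seed_series in trajectories_by_model.items():
        # mean of the observable (e.g. a salt-bridge distance) per seed
        per_seed_means = np.array([np.mean(s) for s in seed_series])
        # spread across seeds: standard deviation of the per-seed means
        scores[model] = float(np.std(per_seed_means))
    best = min(scores, key=scores.get)
    return best, scores

# hypothetical per-seed observable traces for two model variants
data = {
    "GNN*_r0.6_s0.1": [[0.45, 0.46, 0.44], [0.45, 0.45, 0.46], [0.44, 0.45, 0.45]],
    "GNN*_r0.8_s0.1": [[0.45, 0.60, 0.44], [0.30, 0.45, 0.46], [0.44, 0.70, 0.45]],
}
best, spread = seed_stability(data)
```

Here the variant whose per-seed means scatter least would be selected, mirroring the "smallest deviations between the different random seeds" criterion.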
“…An important general issue in ML methods is the choice of evaluation metrics for quantifying successful training. The behavior of the loss function alone seems not to be a reliable criterion, as in some cases an equilibrated loss is not an indicator of physically meaningful results. Stocker et al., for example, report atomistic MD simulations of small organic molecules with GemNet potentials based on QM data. Even though a very low test-set loss on the forces was achieved during training, the MLP produced many unphysical configurations during the simulations.…”
Section: Open Challenges and Future Outlook (mentioning)
confidence: 99%
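A simple way to catch the "unphysical configurations" mentioned above, independently of the test-set loss, is to scan a trajectory for frames where atoms approach each other more closely than any physical bond allows. The sketch below uses a minimum-pair-distance threshold as one such proxy; the 0.7 Å cutoff and the toy frames are illustrative assumptions, not values from the cited work.

```python
import numpy as np

def min_pair_distance(positions):
    """Smallest interatomic distance in one frame (N x 3 array, in Å)."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)  # ignore self-distances
    return dist.min()

def count_unphysical_frames(trajectory, threshold=0.7):
    """Count frames where any two atoms come closer than `threshold` Å,
    a crude proxy for physically meaningless configurations."""
    return sum(min_pair_distance(frame) < threshold for frame in trajectory)

# hypothetical two-atom frames: the second frame has a 0.3 Å contact
frames = [np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]]),
          np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0]])]
n_bad = count_unphysical_frames(frames)
```

Checks of this kind complement the loss value: a potential can reach a low force error on held-out data yet still drive simulations into such configurations.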
“…Evaluation of models trained using various force strategies on hold-out sets using a fixed force aggregation strategy (Table S3) demonstrates that optimized-force models result in lower force residuals; however, we note that success in force prediction and accurate free-energy surfaces often have a complex relationship. [75][76][77] Similar to the case of the rigid water dimer, these results strongly suggest that training using invalid slice force mappings introduces artifacts. These errors appear to be resolved by using maps that satisfy the requirements outlined above.…”
Section: Optimized Forces Improve Protein Models (mentioning)
confidence: 83%
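The "force residuals" compared in the statement above are typically reported as a root-mean-square error between predicted and reference per-atom forces. A minimal sketch, assuming forces are given as N x 3 Cartesian arrays (the example values are hypothetical):

```python
import numpy as np

def force_rmse(f_pred, f_ref):
    """Root-mean-square force residual over all atoms and components."""
    return float(np.sqrt(np.mean((f_pred - f_ref) ** 2)))

# toy example: predictions off by exactly 1 (arbitrary force units)
f_ref = np.zeros((2, 3))
f_pred = np.ones((2, 3))
rmse = force_rmse(f_pred, f_ref)  # → 1.0
```

As the cited passage notes, a lower value of this metric does not by itself guarantee an accurate free-energy surface; the two are related but not interchangeable.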