2023
DOI: 10.1021/acs.jctc.2c01203

Data-Efficient Machine Learning Potentials from Transfer Learning of Periodic Correlated Electronic Structure Methods: Liquid Water at AFQMC, CCSD, and CCSD(T) Accuracy

Abstract: Obtaining the atomistic structure and dynamics of disordered condensed-phase systems from first principles remains one of the forefront challenges of chemical theory. Here we exploit recent advances in periodic electronic structure and provide a data-efficient approach to obtain machine-learned condensed-phase potential energy surfaces using AFQMC, CCSD, and CCSD(T) from a very small number (≤200) of energies by leveraging a transfer learning scheme starting from lower-tier electronic structure methods. We demo…
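As context for the abstract, the sketch below shows the bare transfer-learning pattern it describes: pre-train an energy model on many lower-tier (e.g., DFT-level) energies, then fine-tune the same weights on a small set (≤200) of high-level energies. The descriptor size, network architecture, data shapes, and hyperparameters are illustrative assumptions, not the model or settings used in the paper.

```python
# Minimal sketch of the transfer-learning idea in the abstract:
# pre-train on abundant lower-tier energies, then fine-tune the same
# weights on <=200 high-level (e.g., CCSD(T)) energies.
# All sizes and data below are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

DESC_DIM = 64  # hypothetical per-configuration descriptor length

model = nn.Sequential(
    nn.Linear(DESC_DIM, 128), nn.SiLU(),
    nn.Linear(128, 128), nn.SiLU(),
    nn.Linear(128, 1),  # predicted total energy
)

def train(model, X, E, epochs, lr):
    """Energy-only regression loop (MSE on total energies)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), E)
        loss.backward()
        opt.step()
    return loss.item()

# Stage 1: pre-train on many cheap (lower-tier) energy labels.
X_dft, E_dft = torch.randn(5000, DESC_DIM), torch.randn(5000)
train(model, X_dft, E_dft, epochs=200, lr=1e-3)

# Stage 2: fine-tune the same weights on a small high-level data set,
# with a smaller learning rate so the pre-trained surface is only nudged.
X_cc, E_cc = torch.randn(200, DESC_DIM), torch.randn(200)
train(model, X_cc, E_cc, epochs=100, lr=1e-4)
```

Using a reduced learning rate in the second stage is a common choice so the pre-trained surface is only gently corrected toward the higher-level reference.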

Cited by 29 publications (43 citation statements)
References 64 publications
“…[29–32] In this work, we propose an alternative approach to refs. [38–45] and investigate how interatomic NN models that have been trained only on energy labels during the fine-tuning step perform on atomic forces.…”
Section: Transfer Learning Interatomic Potentials (mentioning)
confidence: 99%
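The excerpt above evaluates force predictions from models that were fine-tuned on energies alone; the reason this is possible is that forces follow from the energy model by differentiation, F = -∂E/∂R. The toy pairwise energy below is a placeholder for the NN potential and is only meant to show the autograd pattern.

```python
# Even when a potential is fit to energies alone, atomic forces can be
# evaluated as the negative gradient of the predicted energy with respect
# to atomic positions. The energy function here is a toy placeholder,
# not the NN model from the cited work.
import torch

def toy_energy(positions: torch.Tensor) -> torch.Tensor:
    """Placeholder energy: sum of inverse pair distances (repulsive toy model)."""
    diff = positions.unsqueeze(0) - positions.unsqueeze(1)  # (N, N, 3)
    dist = diff.norm(dim=-1)
    mask = ~torch.eye(len(positions), dtype=torch.bool)
    return (1.0 / dist[mask]).sum()

positions = torch.randn(8, 3, requires_grad=True)  # 8 atoms, Cartesian coords
energy = toy_energy(positions)

# Forces are the negative gradient of the energy w.r.t. positions.
forces = -torch.autograd.grad(energy, positions)[0]
print(energy.item(), forces.shape)  # scalar energy, (8, 3) forces
```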
“…This behavior is unexpected and has not been observed previously [38–45]. As a possible explanation, we hypothesize that the large number of iterations during pre-training on a large data set predisposes the NN to overfitting, which in turn makes it easier to overfit the small fine-tuning data set. This would suggest that decreasing the number of pre-training epochs when pre-training on very large data sets may help circumvent this phenomenon.…”
Section: Molecular Dynamics Trajectories (mentioning)
confidence: 99%
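The excerpt above hypothesizes that very long pre-training makes the network more prone to overfitting the small fine-tuning set and suggests reducing the number of pre-training epochs. One assumed way to realize that suggestion is to bound pre-training with validation-based early stopping rather than a fixed, very large epoch count; the sketch below is illustrative, with placeholder data and hyperparameters.

```python
# Cap pre-training via validation-based early stopping so the epoch count
# is set by the data rather than fixed at a very large value. Model size,
# data, and patience are illustrative placeholders.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy "large" pre-training set, split into training and validation parts.
X, E = torch.randn(5000, 32), torch.randn(5000)
X_tr, E_tr, X_val, E_val = X[:4500], E[:4500], X[4500:], E[4500:]

best_val, best_state, patience, bad_epochs = float("inf"), None, 10, 0
for epoch in range(1000):  # upper bound, rarely reached
    opt.zero_grad()
    loss_fn(model(X_tr).squeeze(-1), E_tr).backward()
    opt.step()
    with torch.no_grad():
        val = loss_fn(model(X_val).squeeze(-1), E_val).item()
    if val < best_val:
        best_val, best_state, bad_epochs = val, copy.deepcopy(model.state_dict()), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # stop pre-training early
            break

model.load_state_dict(best_state)  # hand this state to fine-tuning
```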