2022
DOI: 10.1063/5.0106617
GPUMD: A package for constructing accurate machine-learned potentials and performing highly efficient atomistic simulations

Abstract: We present our latest advancements of machine-learned potentials (MLPs) based on the neuroevolution potential (NEP) framework introduced in [Fan et al., Phys. Rev. B 104, 104309 (2021)] and their implementation in the open-source package GPUMD. We increase the accuracy of NEP models both by improving the radial functions in the atomic-environment descriptor using a linear combination of Chebyshev basis functions and by extending the angular descriptor with some four-body and five-body contributions as in the at…
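The abstract's mention of radial descriptor functions built from Chebyshev basis functions can be made concrete with a short sketch. The distance scaling, cutoff form, and function name below follow the NEP paper's general recipe but are illustrative assumptions, not the exact GPUMD implementation:

```python
import numpy as np

def chebyshev_radial(r, r_cut, n_max):
    """Sketch of NEP-style radial basis functions: Chebyshev polynomials
    of the first kind T_k evaluated on a scaled distance, damped by a
    smooth cutoff so every basis function vanishes at r_cut.
    r is a scalar distance; returns an array of n_max + 1 values."""
    # Map r in [0, r_cut] onto x in [-1, 1] (assumed scaling)
    x = 2.0 * (r / r_cut - 1.0) ** 2 - 1.0
    # Smooth cosine cutoff (assumed form), zero at and beyond r_cut
    fc = 0.5 * (1.0 + np.cos(np.pi * r / r_cut)) if r < r_cut else 0.0
    # Standard Chebyshev recurrence: T_0 = 1, T_1 = x, T_k = 2x T_{k-1} - T_{k-2}
    T = np.empty(n_max + 1)
    T[0] = 1.0
    if n_max >= 1:
        T[1] = x
    for k in range(2, n_max + 1):
        T[k] = 2.0 * x * T[k - 1] - T[k - 2]
    # Shift so each basis function is non-negative, then apply the cutoff
    return 0.5 * (T + 1.0) * fc
```

A trainable radial function is then a linear combination of these basis values, with the expansion coefficients learned during fitting.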

Cited by 74 publications (79 citation statements)
References 146 publications (232 reference statements)
“…However, thermal transport usually involves large length scales and long time scales, and GAP18 is not currently efficient enough for this purpose. The NEP model as implemented in the GPUMD package [27,28], on the other hand, can reach a computational speed of about 5 × 10^6 atom-steps per second for a-Si using a single GPU card such as a Tesla V100, which is about three orders of magnitude faster than GAP18 running on 72 Xeon-Gold 6240 central processing unit (CPU) cores [18].…”
Section: Training a Machine Learned Potential For A-simentioning
confidence: 99%
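The quoted throughput translates directly into wall time. A quick back-of-the-envelope check, using only the numbers from the quote plus a hypothetical run size chosen for illustration:

```python
# Throughput quoted in the passage for NEP/GPUMD on one Tesla V100 (a-Si):
atom_steps_per_second = 5e6

# Hypothetical production run, chosen only for illustration:
n_atoms = 100_000      # atoms in the simulation cell
n_steps = 1_000_000    # MD steps (e.g. 1 ns at a 1 fs time step)

wall_seconds = n_atoms * n_steps / atom_steps_per_second
print(f"NEP/GPUMD:  ~{wall_seconds / 3600:.1f} GPU-hours")

# GAP18 is quoted as ~3 orders of magnitude slower (on 72 CPU cores):
print(f"GAP18-like: ~{wall_seconds * 1e3 / 86400:.0f} days")
```

For this assumed system size, the quoted speed ratio is the difference between an overnight GPU job and a months-long CPU campaign, which is why the citing authors consider NEP viable for thermal transport where GAP18 is not.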
“…U0(n), F0_i, and W0_mn(n) are the corresponding quantities calculated from first-principles calculations based on DFT, and serve as the target values for machine learning. In this study, NEP generation and the related machine-learning simulations (phonon dispersion, mechanical behavior, and lattice thermal conductivity) are carried out with the open-source GPUMD-3.3.1 package.45,46 The training data are prepared using the first-principles calculations program in the QUANTUM ESPRESSO 6.8 package.42 AIMD within the NVT ensemble is applied to produce atomic displacements, performed in a supercell containing 40 atoms.…”
Section: The Nep Obtained From Machine Learningmentioning
confidence: 99%
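The quoted passage treats DFT energies, forces, and virials as the targets of the fit. A minimal sketch of the kind of weighted loss such training minimizes; the weights, names, and RMSE form are illustrative assumptions, not the exact GPUMD objective:

```python
import numpy as np

def nep_style_loss(U, U0, F, F0, W, W0, w_e=1.0, w_f=1.0, w_v=0.1):
    """Weighted sum of root-mean-square errors between predicted and
    DFT target energies (U vs U0), forces (F vs F0), and virials
    (W vs W0). Weights w_e, w_f, w_v are assumed hyperparameters."""
    def rmse(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(np.sqrt(np.mean((a - b) ** 2)))
    return w_e * rmse(U, U0) + w_f * rmse(F, F0) + w_v * rmse(W, W0)
```

In the NEP framework this kind of objective is minimized by an evolutionary strategy rather than gradient descent, but the targets entering it are the same U0, F0_i, and W0_mn quantities the citing authors describe.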
“…The NEP still needs to be further developed for complex systems.45 So far, the HNEMD method has been widely used to investigate the heat transport properties of materials; in this method, a steady heat flux is induced by a homogeneous driving force. To eliminate the size effect in the thermal conductivity simulations, a large rectangular supercell is established, as shown in Fig.…”
Section: Prediction Of Lattice Thermal Conductivity Using the Mechani...mentioning
confidence: 99%
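In HNEMD, a small homogeneous driving force F_e induces a steady heat flux, and the thermal conductivity follows from the linear response kappa(t) = <J(t)> / (T V F_e). A sketch of that post-processing step; the cumulative-average form and consistent units are assumptions for illustration:

```python
import numpy as np

def hnemd_kappa(J_running, T, V, Fe):
    """Running thermal conductivity from an HNEMD trajectory.

    J_running : time series of the heat-current component along the
                driving force (assumed units consistent with T, V, Fe)
    T, V, Fe  : temperature, cell volume, driving-force parameter

    Returns kappa(t) = <J(t)> / (T * V * Fe), where <J(t)> is the
    cumulative time average of the heat current up to step t."""
    J_avg = np.cumsum(J_running) / np.arange(1, len(J_running) + 1)
    return J_avg / (T * V * Fe)
```

In practice one checks that kappa(t) converges to a plateau and that F_e is small enough to stay in the linear-response regime, which is also why large supercells are needed to suppress finite-size effects.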
“…MLIPs are commonly trained on DFT-based datasets, and their performance with respect to flexibility and accuracy15 is close to that of the DFT method, but at a substantially reduced computational cost, since the expensive and, for dynamics, unnecessary electronic structure calculations are bypassed. MLIPs can now be employed directly to conduct molecular dynamics simulations on highly efficient massively parallel processors and graphics processing units, using various packages such as LAMMPS,16 TorchMD,17 GPUMD18 and MLatom.17…”
Section: Introductionmentioning
confidence: 99%