2017
DOI: 10.1038/s41598-017-01251-z

Discovering charge density functionals and structure-property relationships with PROPhet: A general framework for coupling machine learning and first-principles methods

Abstract: Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to purs…
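As a rough illustration of the surrogate-model idea behind coupling machine learning to first-principles methods (train once on expensive ab initio results, then predict cheaply), the following minimal Python sketch fits a small neural network to placeholder descriptors and targets. It is not PROPhet's actual interface, and the data are synthetic.

```python
# Minimal sketch of an ML surrogate for first-principles results (not PROPhet's
# actual C++/input-file interface). X and y are synthetic placeholders standing
# in for structural descriptors and DFT-computed properties.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                              # placeholder descriptors
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)    # placeholder DFT property

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("surrogate R^2 on held-out structures:", model.score(X_test, y_test))
```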

Cited by 116 publications (99 citation statements) | References 45 publications
“…Roughly two classes of properties can be predicted, or classified, using machine learning methods: bandgaps and electronic conductivity. The former being widely explored by regression techniques, capable of presenting a numerical value for the gap [206,210,253,264,452-462], or classification methods, which simply provide an answer to the question 'is this compound or material a metal?' [463].…”
Section: Electronic Properties (mentioning; confidence: 99%)
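The quoted passage distinguishes regression (predicting a numerical gap value) from classification ("is this compound a metal?"). A minimal scikit-learn sketch of both task types, using synthetic descriptors and labels rather than any data from the cited studies, might look like this:

```python
# Hedged illustration of the two task types described above: regressing a band
# gap and classifying metal vs. non-metal. All features and labels are
# synthetic placeholders.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))                                  # placeholder compound descriptors
gap = np.clip(X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300), 0, None)
is_metal = (gap < 0.05).astype(int)                            # "is this compound a metal?"

regressor = KernelRidge(kernel="rbf", alpha=1e-3).fit(X, gap)  # numeric gap value
classifier = LogisticRegression(max_iter=1000).fit(X, is_metal)  # metal / insulator label

print("predicted gap (eV):", regressor.predict(X[:1]))
print("metal?            :", bool(classifier.predict(X[:1])[0]))
```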
“…Among those, many authors point out the requirement that descriptors should be invariant with respect to translations and rotations of atomic positions, as well as reordering of atomic indices. Popular descriptors with these properties can be categorized into few cases: structural data such as Coulomb matrices [248,457,470], molecular strings or graphs [264,455,457], and polymer fingerprinting [457-459]; simple atomic properties of the constituent species [460,462,466], and DFT-derived data, such as PBE/LDA-level bandgaps and hybrid-level electronic density [206,272,452,456,461,471,472]. Frequently a combination of two or more classes of descriptors [453,457,460,473] as well as experimental data as features [272,465] is found in the literature.…”
Section: Electronic Properties (mentioning; confidence: 99%)
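One of the descriptor classes named above, the Coulomb matrix, can be made invariant to atom reordering by using its sorted eigenvalue spectrum, and invariant to rigid translations and rotations by building it from interatomic distances. A small illustrative sketch follows; the water geometry is a made-up toy input, not data from the cited works.

```python
# Minimal Coulomb-matrix descriptor. Sorting the eigenvalue spectrum gives
# permutation invariance; using only interatomic distances gives invariance to
# rigid translations and rotations.
import numpy as np

def coulomb_matrix_eigenvalues(Z, R):
    """Z: (N,) nuclear charges; R: (N, 3) Cartesian positions in Angstrom."""
    N = len(Z)
    C = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                C[i, j] = 0.5 * Z[i] ** 2.4                    # standard diagonal term
            else:
                C[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return np.sort(np.linalg.eigvalsh(C))[::-1]                # permutation-invariant

# Toy input: a water molecule (O, H, H)
Z = np.array([8, 1, 1])
R = np.array([[0.000,  0.000,  0.117],
              [0.000,  0.757, -0.469],
              [0.000, -0.757, -0.469]])
print(coulomb_matrix_eigenvalues(Z, R))
```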
“…Besides general purpose ML tools such as scikit‐learn, tensorflow, and Pytorch, there has been an explosion of customized open‐source ML software libraries for materials science. A nonexhaustive list includes AutoMatminer, PROPhet for general materials ML; amp, ænet, and ANI for developing neural network potentials; CGCNN, MEGNet, and SchnetPack are graph‐based deep learning model packages for accurate crystal and/or molecule property modeling.…”
Section: Model Selection and Training (mentioning; confidence: 99%)
“…The per-atom neural network potentials were similar to a standard scheme shown to be effective in recent literature 16 and implemented recently in an open-source software package, 17 one of several implementations in the literature. 18,19 However, rather than predict the electronic energy reported by the DFT code as is standard for these methods, the adsorption energy was instead chosen as a target, which had several advantages. First, the adsorption energy is a small well-normalized energy, usually ranging from -3 eV to +1 eV, so that energy normalization was not a problem.…”
(mentioning; confidence: 99%)
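The scheme described in this excerpt, per-atom networks whose contributions are summed and trained against a small, well-normalized adsorption energy rather than the raw DFT total energy, can be sketched as below. The fingerprints, energies, and network sizes are illustrative assumptions, not the cited authors' actual setup.

```python
# Hedged sketch of a per-atom neural network potential trained on adsorption
# energies (roughly -3 to +1 eV) instead of total electronic energies.
# Per-atom fingerprints and targets are random placeholders.
import torch

n_structures, max_atoms, n_features = 64, 12, 20
x = torch.randn(n_structures, max_atoms, n_features)          # per-atom fingerprints
e_ads = torch.empty(n_structures).uniform_(-3.0, 1.0)         # placeholder adsorption energies

atomic_net = torch.nn.Sequential(                             # shared per-atom network
    torch.nn.Linear(n_features, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)
opt = torch.optim.Adam(atomic_net.parameters(), lr=1e-2)

for step in range(200):
    e_pred = atomic_net(x).squeeze(-1).sum(dim=1)              # sum atomic contributions
    loss = torch.nn.functional.mse_loss(e_pred, e_ads)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final MSE (eV^2):", loss.item())
```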