2022
DOI: 10.1039/d2sc04056e

Uncertainty quantification for predictions of atomistic neural networks

Abstract: The value of uncertainty quantification for the predictions of neural networks (NNs) trained on quantum chemical reference data is quantitatively explored. For this, the architecture of the PhysNet NN was suitably...

Cited by 11 publications (11 citation statements). References: 74 publications.
“…Understanding and accurately assessing the uncertainty arising from both sources is a key piece of information required within the ML workflow if these methods are to become widely adopted by research communities, especially in supporting non-expert users to rapidly assess the reliability of their data. Moreover, accurate uncertainty quantification provides a strategy for growing a given training set in a targeted way, based upon unsatisfactorily high uncertainties, 6 which is especially important when data are time-consuming or expensive to acquire. The importance and aforementioned benefits of obtaining an accurate quantification of uncertainty have led to a significant research effort in the field and the development of several techniques including Monte Carlo dropout, 7 deep ensembles, 8 bootstrap resampling 9 and Bayesian neural networks.…”
mentioning
confidence: 99%
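As an illustration of the deep-ensemble technique named in the excerpt above, the following is a minimal sketch, not the paper's implementation: several independently initialised networks are trained on the same data, and the spread of their predictions serves as the uncertainty. The toy MLP, the data shapes, and all hyperparameters (`n_members`, `epochs`, `lr`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy MLP standing in for an atomistic NN; architecture is illustrative.
def make_model(n_in: int) -> nn.Module:
    return nn.Sequential(nn.Linear(n_in, 64), nn.SiLU(), nn.Linear(64, 1))

def train_member(model, x, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x).squeeze(-1), y)
        loss.backward()
        opt.step()
    return model

def ensemble_predict(x_train, y_train, x_test, n_members=5):
    """Deep ensemble: train independently initialised members on the same
    data; the mean of their predictions is the estimate, the standard
    deviation across members its (epistemic) uncertainty."""
    preds = []
    for _ in range(n_members):
        member = train_member(make_model(x_train.shape[1]), x_train, y_train)
        with torch.no_grad():
            preds.append(member(x_test).squeeze(-1))
    preds = torch.stack(preds)          # shape (n_members, n_test)
    return preds.mean(0), preds.std(0)
```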
“…Different loss functions for fitting NNs can be used as well. 76 In general, the loss function is highly nonlinear and is minimized iteratively by a gradient-descent algorithm which, ideally, finds the best solution despite the potentially many local minima. 56 For PES fitting, convergence behaviour and accuracy can be improved by including additional information such as atomic forces or dipole moments (or other properties of the system) in the loss function.…”
Section: Theoretical Background
mentioning
confidence: 99%
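A minimal sketch of the composite loss described in this excerpt, under common-practice assumptions: predicted forces are taken as the negative gradient of the predicted energy with respect to atomic positions, and the weights `w_e`, `w_f` are illustrative placeholders, not values from the paper.

```python
import torch

def energy_force_loss(model, positions, e_ref, f_ref, w_e=1.0, w_f=10.0):
    """Composite loss: energy error plus force error, with predicted forces
    obtained as the negative gradient of the predicted energy with respect
    to the atomic positions. w_e and w_f are illustrative weights."""
    positions = positions.requires_grad_(True)
    e_pred = model(positions)
    f_pred = -torch.autograd.grad(e_pred.sum(), positions,
                                  create_graph=True)[0]
    return (w_e * torch.nn.functional.mse_loss(e_pred, e_ref)
            + w_f * torch.nn.functional.mse_loss(f_pred, f_ref))
```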
“… 162 As a solution to this bottleneck, methods that obtain the uncertainty in a single evaluation have been proposed. Some of us 76 recently introduced a modification of the PhysNet architecture that allows the calculation of the uncertainty of the prediction through a method called deep evidential regression. 163 Using this method, the energy distribution of the system is represented with a Gaussian and its uncertainty as a gamma distribution.…”
Section: Construction of PESs
mentioning
confidence: 99%
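A hedged sketch of an evidential output head in the style of deep evidential regression (ref. 163): the network predicts the four parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution over the energy, from which aleatoric and epistemic uncertainties follow in closed form. The class name, layer size, and feature input `h` are illustrative; this is not the actual PhysNet modification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Illustrative output head: maps features to the four parameters
    (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution over
    the predicted energy, as in deep evidential regression."""
    def __init__(self, n_in: int):
        super().__init__()
        self.out = nn.Linear(n_in, 4)

    def forward(self, h):
        gamma, log_nu, log_alpha, log_beta = self.out(h).unbind(-1)
        nu = F.softplus(log_nu)              # nu > 0
        alpha = F.softplus(log_alpha) + 1.0  # alpha > 1
        beta = F.softplus(log_beta)          # beta > 0
        return gamma, nu, alpha, beta

def uncertainties(nu, alpha, beta):
    """Closed-form moments of the Normal-Inverse-Gamma posterior."""
    aleatoric = beta / (alpha - 1.0)         # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))  # model (epistemic) variance
    return aleatoric, epistemic
```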
“…While deep learning methods have seen rapid development in the past decade, they cannot be applied to the relatively small data sets that are common in materials science and other fields. For such data sets, traditional ML methods such as support vector regression, random forest and XGBoost [1][2][3] must be used. Despite their success, ML methods have several disadvantages, such as lack of interpretability and inability to accurately estimate the reliability of ML predictions, in contrast to methods based on fundamental scientific principles (e.g.…”
Section: Introduction
mentioning
confidence: 99%
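For the traditional models listed in this excerpt, the spread across the individual trees of a random forest gives a simple, if crude, uncertainty estimate. A sketch with scikit-learn on synthetic stand-in data (the arrays `X`, `y` are illustrative, not a real materials data set):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data; a real application would use tabular
# materials-science descriptors instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

X_new = rng.normal(size=(5, 8))
# Spread across the individual trees as a crude uncertainty estimate.
per_tree = np.stack([tree.predict(X_new) for tree in rf.estimators_])
mean, spread = per_tree.mean(axis=0), per_tree.std(axis=0)
```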
“…This has motivated a research effort focusing on uncertainty quantification (UQ) for machine learning models [1][2][3][4][5][6][7][8][9][10][11][12][13]. Among the methods used for UQ, approaches based on similarity evaluation, i.e. on the distances between the point to be predicted and the points in the training set, are advantageous due to their low computational cost and ease of interpretation.…”
Section: Introduction
mentioning
confidence: 99%
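The distance-based UQ idea in this last excerpt can be sketched in a few lines: the mean distance from a query point to its `k` nearest training points in descriptor space serves as a cheap, interpretable uncertainty proxy. The function name and the default `k=5` are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def distance_uncertainty(X_train, X_query, k=5):
    """Mean distance to the k nearest training points in feature space,
    used as a cheap, interpretable surrogate for prediction uncertainty:
    the farther a query lies from the training data, the less the model's
    prediction should be trusted."""
    nn_index = NearestNeighbors(n_neighbors=k).fit(X_train)
    dists, _ = nn_index.kneighbors(X_query)
    return dists.mean(axis=1)
```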