2022
DOI: 10.3390/computation10020027

Should We Gain Confidence from the Similarity of Results between Methods?

Abstract: Confirming the result of a calculation by a calculation with a different method is often seen as a validity check. However, when the methods considered are all subject to the same (systematic) errors, this practice fails. Using a statistical approach, we define measures for reliability and similarity, and we explore the extent to which the similarity of results can help improve our judgment of the validity of data. This method is illustrated on synthetic data and applied to two benchmark datasets extracted from…
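To make the abstract's central point concrete, here is a minimal synthetic sketch (not the paper's actual measures; all numbers are assumed for illustration): two methods that share the same systematic error agree closely with each other while both missing the reference values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

truth = rng.normal(0.0, 1.0, n)   # hypothetical reference values
shared_bias = 0.5                 # systematic error common to both methods

# Two "independent" methods that nevertheless share the same systematic error,
# differing only by small random errors
method_a = truth + shared_bias + rng.normal(0.0, 0.05, n)
method_b = truth + shared_bias + rng.normal(0.0, 0.05, n)

# The methods agree closely with each other...
print("mean |A - B|    :", np.mean(np.abs(method_a - method_b)))  # ~0.06
# ...yet both deviate from the reference by the shared bias
print("mean |A - truth|:", np.mean(np.abs(method_a - truth)))     # ~0.5
```

Agreement between the two methods says nothing here about their accuracy, which is exactly the failure mode the paper analyzes.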

Cited by 3 publications (3 citation statements)
References 18 publications
“…In cases where the reference value is uncertain, good mutual agreement between predictions from different "high-level" methods can be treated as support of their reliability (unless these methods are subject to the same systematic error [95]). However, in the context of TM spin-state energetics, the application of methods assumed by different authors to be "reliable" can lead in some cases to strikingly divergent results (Table 1).…”
Section: Reference Values From Theory
confidence: 99%
“…The observed confidence curve is often compared with what one would get for an "oracle", which represents the quite unrealistic scenario that the rankings of the errors and uncertainties are perfectly correlated (corresponding to ρ_rank = 1), meaning that the uncertainty predictor is actually an error predictor. Here we will focus on ρ_rank to represent the ranking-based metrics, but refer the reader to work by Pernot on the use of confidence curves for UQ validation, which was published while preparing this manuscript [18]. Similarly to how we propose the simulated ρ_rank as a reference for Spearman's rank correlation coefficient, Pernot suggests changing the reference confidence curve from an "oracle" to a probabilistic one based on errors sampled from the predicted uncertainties, assuming normally distributed errors (just like we do for ρ_rank^sim).…”
Section: Evaluation Metrics
confidence: 99%
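The following Python snippet is a minimal sketch of the comparison described in the quote above, on synthetic data; the function name confidence_curve and all parameter values are hypothetical, not taken from either paper. It computes an observed confidence curve, the "oracle" curve obtained by ranking on the true absolute errors, and a probabilistic reference obtained by resampling errors from the predicted uncertainties under a normality assumption.

```python
import numpy as np

def confidence_curve(errors, ranking, steps=20):
    """Mean absolute error of the points remaining after discarding a
    growing fraction of points (largest `ranking` values discarded first)."""
    order = np.argsort(ranking)[::-1]        # highest-ranked points first
    e = np.abs(np.asarray(errors))[order]
    n = len(e)
    fracs = np.linspace(0.0, 0.9, steps)     # fraction of points discarded
    mae = np.array([e[int(f * n):].mean() for f in fracs])
    return fracs, mae

rng = np.random.default_rng(1)
n = 2000
u = rng.uniform(0.1, 1.0, n)      # hypothetical predicted uncertainties
errors = rng.normal(0.0, u)       # errors actually drawn from N(0, u_i)

fracs, observed = confidence_curve(errors, u)          # observed curve
_, oracle = confidence_curve(errors, np.abs(errors))   # "oracle": rank by |error|

# Probabilistic reference: average the curves over errors resampled from
# N(0, u_i), i.e. what a perfectly calibrated uncertainty predictor would give
sims = np.array([confidence_curve(rng.normal(0.0, u), u)[1]
                 for _ in range(200)])
reference = sims.mean(axis=0)
```

Even for these perfectly calibrated synthetic uncertainties, the observed curve tracks the probabilistic reference rather than the oracle, which is the motivation for changing the reference curve.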
“…We can note the broad range of scientific topics covered by Karlheinz's knowledge. We deeply acknowledge the following contributions related to spectroscopy by Manuel Yáñez et al [12], Juan-Carlos Sancho-García and Emilio San-Fabián [13]; excited states by Ágnes Nagy [14], Kalidas Sen et al [15] and Fabrizia Negri et al [16]; DFT developments by Fabio Della Sala et al [17], Mathias Rapacioli and Nathalie Tarrat [18], Emmanuel Fromager et al [19], José Manuel García de la Vega et al [20] and Harry Ramanantoanina [21]; results analysis by Andreas Savin et al [22] and Manuel Richter et al [23]; and, of course, the solid state and surfaces by Leila Kalantari and Fabien Tran et al [24], Denis Salahub et al [25], Peter Blaha et al [26], Samuel B. Trickey [27], William Lafargue-Dit-Hauret and Xavier Rocquefelte [28], and Tzonka Mineva and Hazar Guesmi et al [29]. (H.C.)…”
confidence: 99%