2019
DOI: 10.1115/1.4045296
A Unifying Framework for Probabilistic Validation Metrics

Abstract: Probabilistic modeling methods are increasingly being employed in engineering applications. These approaches make inferences about the distribution for output quantities of interest. A challenge in applying probabilistic computer models (simulators) is validating output distributions against samples from observational data. An ideal validation metric is one that intuitively provides information on key differences between the simulator output and observational distributions, such as statistical distances/diverg…

Cited by 9 publications (5 citation statements). References 26 publications.
“…M is a measurable space, ϕ is a convex function, F is a class of real-valued bounded measurable functions on M, f is the subset of functions that define the metric, and sup is the supremum: the least upper bound of point-wise difference. 30,32 Another (true) metric that is considered here is the maximum mean discrepancy (MMD), which circumvents the need for density estimation (this is considered beneficial as there is no universally accepted methodology for conducting density estimations, and they can introduce unwanted variability into the metric formulations). The MMD uses a kernel function to obtain the maximum distance between the mean embeddings of two features or vectors that have been mapped into a reproducing kernel Hilbert space (RKHS).…”
Section: Distance Metrics
confidence: 99%
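The MMD described in this excerpt admits a simple empirical estimator. Below is a minimal sketch (not the paper's or the citing authors' implementation), assuming a Gaussian (RBF) kernel, an illustrative bandwidth, and synthetic stand-in samples for simulator output and observational data:

```python
# Minimal sketch of an empirical (biased) MMD estimate with a Gaussian kernel.
# The kernel choice, bandwidth, and data below are illustrative assumptions.
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian kernel evaluated between every row of x and every row of y
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd(x, y, bandwidth=1.0):
    # Squared distance between the kernel mean embeddings in the RKHS,
    # estimated from samples, then square-rooted.
    k_xx = rbf_kernel(x, x, bandwidth).mean()
    k_yy = rbf_kernel(y, y, bandwidth).mean()
    k_xy = rbf_kernel(x, y, bandwidth).mean()
    return np.sqrt(max(k_xx + k_yy - 2.0 * k_xy, 0.0))

rng = np.random.default_rng(0)
sim = rng.normal(0.0, 1.0, size=(500, 1))   # stand-in simulator output samples
obs = rng.normal(0.2, 1.1, size=(300, 1))   # stand-in observational samples
print(mmd(sim, obs, bandwidth=1.0))
```

A larger value indicates a greater discrepancy between the simulator output and observational samples under the chosen kernel; a value near zero suggests the two sample sets are hard to distinguish.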
“M is a measurable space, ϕ is a convex function, F is a class of real-valued bounded measurable functions on M, f is the subset of functions that define the metric, and sup is the supremum: the least upper bound of point-wise difference. 30,32…”
Section: Introduction
confidence: 99%
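For context, the quoted definition appears to paraphrase the standard ϕ-divergence and integral probability metric forms. A sketch in standard notation (P and Q denote the two probability measures being compared; these symbols are assumed here for illustration, not taken from the excerpt):

```latex
% phi-divergence (phi convex, phi(1) = 0) and the integral probability metric
% generated by a class F of real-valued bounded measurable functions on M
\[
  D_\phi(P \,\|\, Q) \;=\; \int_M \phi\!\left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)\mathrm{d}Q,
  \qquad
  \gamma_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}}
  \left|\, \int_M f\,\mathrm{d}P \;-\; \int_M f\,\mathrm{d}Q \,\right| .
\]
```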
“…If the comparison were made on the whole predictive or parameter density functions, i.e. the scenario in which the predictive distribution is compared to observational distribution (often via a finite sample set), one might define a statistical distance (or divergence) measure [15,17], for example, a Hellinger distance, leading to the definition of an α−mirror as a Hellinger-mirror etc.…”
Section: Hybrid Models and Uncertainty
confidence: 99%
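As a concrete example of such a statistical distance, the Hellinger distance named in the excerpt has the standard form below (written for densities p and q; the notation is assumed here for illustration, not quoted from the cited work):

```latex
% Squared Hellinger distance between densities p and q
\[
  H^2(p, q) \;=\; \tfrac{1}{2}\int \left(\sqrt{p(x)} - \sqrt{q(x)}\right)^2 \mathrm{d}x
            \;=\; 1 - \int \sqrt{p(x)\,q(x)}\,\mathrm{d}x ,
  \qquad 0 \le H \le 1 .
\]
```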
“…Loading is often unknown and unmeasured, and dynamic behaviour during operation needs to be fully captured by a computational model, but is sensitive to small changes in (or disturbances to) the structure. Validation and updating of large complex models bring their own challenges and remain active research areas [4][5][6][7].…”
Section: Introduction
confidence: 99%