To use predictive models in the engineering design of physical systems, one should first quantify the model uncertainty via model updating techniques that employ both simulation and experimental data. While calibration is often used to tune the unknown calibration parameters of a computer model, a discrepancy function is commonly added to capture model discrepancy arising from missing physics, numerical approximations, and other inaccuracies of the computer model that would remain even if all calibration parameters were known. One of the main challenges in model updating is the difficulty of distinguishing between the effects of the calibration parameters and those of model discrepancy. We illustrate this identifiability problem with several examples, explain the mechanisms behind it, and attempt to shed light on when a system may or may not be identifiable. In some instances, identifiability is achievable under mild assumptions, whereas in others it is virtually impossible. In a companion paper, we demonstrate that using multiple responses, each of which depends on a common set of calibration parameters, can substantially enhance identifiability.
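To make the confounding concrete, the usual updating formulation writes an observation as y_e(x) = y_m(x, theta) + delta(x) + epsilon, where theta holds the calibration parameters, delta is the discrepancy function, and epsilon is measurement error. The minimal sketch below (our own toy numbers, not from the paper) shows two very different (theta, delta) pairs reproducing the same single-response data exactly, which is exactly why the two effects are hard to separate:

import numpy as np

# Toy illustration of the confounding: observations follow y_e(x) = 2.5*x,
# and the computer model is y_m(x, theta) = theta*x. Two different
# (theta, delta) pairs reproduce the data exactly, so single-response data
# alone cannot distinguish them. All quantities here are hypothetical.
x = np.linspace(0.0, 1.0, 6)
y_exp = 2.5 * x                                         # noise-free "experimental" data

def prediction(theta, delta):
    return theta * x + delta(x)                         # y_m(x, theta) + delta(x)

pred_a = prediction(2.5, lambda x: np.zeros_like(x))    # all effect attributed to theta
pred_b = prediction(1.0, lambda x: 1.5 * x)             # misfit absorbed by delta
print(np.allclose(pred_a, y_exp), np.allclose(pred_b, y_exp))   # True True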
Model validation metrics have been developed to provide a quantitative measure that characterizes the agreement between predictions and observations. In engineering design, such metrics are useful for model selection when alternative models are being considered. Additionally, the predictive capability of a computational model needs to be assessed before it is used in engineering analysis and design. Because of the various sources of uncertainty in both computer simulations and physical experiments, model validation must be conducted on a stochastic basis. Currently, no unified validation metric is widely accepted. In this paper, we present a classification of validation metrics based on their key characteristics, along with a discussion of the desired features. Focusing on stochastic validation that accounts for uncertainty in both predictions and physical experiments, four main types of metrics, namely classical hypothesis testing, the Bayes factor, the frequentist's metric, and the area metric, are examined to provide a better understanding of the pros and cons of each. Using mathematical examples, a set of numerical studies is designed to answer various research questions and to study how sensitive these metrics are to the size of the experimental data, the uncertainty from measurement error, and the uncertainty in unknown model parameters. The insight gained from this work provides useful guidelines for choosing the appropriate validation metric in engineering applications.
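Of the four metric types, the area metric is the most mechanical to compute: it is commonly defined as the area between the model's predictive cumulative distribution function and the empirical CDF of the observations, with smaller values indicating better agreement. A minimal sketch of that idea, assuming both distributions are represented by samples (the function name and the toy numbers are ours, not from the paper):

import numpy as np

def area_metric(model_samples, exp_samples):
    # Area between the two empirical CDFs; units are those of the response.
    grid = np.sort(np.concatenate([model_samples, exp_samples]))
    F_model = np.searchsorted(np.sort(model_samples), grid, side="right") / len(model_samples)
    F_exp = np.searchsorted(np.sort(exp_samples), grid, side="right") / len(exp_samples)
    widths = np.diff(grid)
    return float(np.sum(np.abs(F_model - F_exp)[:-1] * widths))

# Toy usage: abundant model predictions vs. a small set of noisy observations
rng = np.random.default_rng(0)
pred = rng.normal(10.0, 1.0, 5000)     # predictions with parameter uncertainty
obs = rng.normal(10.5, 1.2, 20)        # limited experimental data
print(area_metric(pred, obs))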
The use of complex computer simulations to design, improve, optimize, or simply better understand complex systems is now ubiquitous in many fields of science and engineering. However, simulation models are never a perfect representation of physical reality. Two general sources of uncertainty account for the differences between simulations and experiments: parameter uncertainty and model uncertainty. The former derives from unknown model parameters, while the latter is caused by missing physics, numerical approximations, and other inaccuracies of the computer simulation that would exist even if all of the parameters were known. To quantify these two sources of uncertainty, data from computer simulations (usually abundant) and data from physical experiments (typically more limited) are often combined using statistical methods. Statistical adjustment of the computer simulation model to account for the two sources of uncertainty is referred to as calibration. We argue that calibration as it is typically implemented, using only a single response variable, is challenging in that it is often extremely difficult to distinguish between the effects of parameter and model uncertainty. However, many different responses (distinct responses and/or the same response measured at different spatial and temporal locations) are automatically calculated in simulations. Because multiple responses generally share a mutual dependence on the unknown parameters, they provide valuable information that can improve the identifiability of parameter and model uncertainty in calibration, provided they are also measured experimentally. In this paper, we explore the use of multiple responses for calibration.
In physics-based engineering modeling, the two primary sources of model uncertainty, which account for the differences between computer models and physical experiments, are parameter uncertainty and model discrepancy. Distinguishing the effects of these two sources of uncertainty can be challenging. For situations in which identifiability cannot be achieved using only a single response, we propose to improve identifiability by using multiple responses that share a mutual dependence on a common set of calibration parameters. To that end, we extend the single-response modular Bayesian approach for calculating the posterior distributions of the calibration parameters and the discrepancy function to multiple responses. Using an engineering example, we demonstrate that including multiple responses can improve identifiability (as measured by posterior standard deviations) by an amount that ranges from minimal to substantial, depending on the characteristics of the specific responses that are combined.
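The sketch below illustrates the underlying idea with a brute-force grid posterior rather than the paper's modular Bayesian machinery: response 1 confounds theta with a flexible discrepancy term, while adding response 2, which depends on the same theta, shrinks the posterior standard deviation of theta. All models and numbers here are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 15)
theta_true, b_true, noise = 2.0, 0.5, 0.05

# Response 1: y1 = theta*x + delta(x) + error, with a flexible discrepancy delta(x) = b*x,
# so only theta + b is constrained. Response 2: y2 = theta + error, assumed discrepancy-free.
y1 = theta_true * x + b_true * x + rng.normal(0, noise, x.size)
y2 = theta_true + rng.normal(0, noise, x.size)

# Brute-force posterior over (theta, b) with flat priors and Gaussian likelihoods.
thetas = np.linspace(1.0, 3.5, 251)
bs = np.linspace(-1.0, 1.5, 251)
T, B = np.meshgrid(thetas, bs, indexing="ij")
ll1 = -0.5 * (((y1 - (T[..., None] + B[..., None]) * x) / noise) ** 2).sum(-1)
ll2 = -0.5 * (((y2 - T[..., None]) / noise) ** 2).sum(-1)

def posterior_sd_theta(log_post):
    p = np.exp(log_post - log_post.max())
    p /= p.sum()
    mean = (p * T).sum()
    return float(np.sqrt((p * (T - mean) ** 2).sum()))

print("sd(theta), response 1 only  :", posterior_sd_theta(ll1))        # wide, prior-like
print("sd(theta), responses 1 and 2:", posterior_sd_theta(ll1 + ll2))  # much smaller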