In experiment-based validation, uncertainties and systematic biases in model predictions are reduced either by increasing the amount of experimental evidence available for model calibration, thereby mitigating prediction uncertainty, or by increasing the rigor with which the underlying physics and engineering principles are defined, thereby mitigating prediction bias. Decision makers must therefore regularly choose between allocating resources to experimentation and allocating them to further code development. The authors propose a decision-making framework to guide this resource allocation strictly from the perspective of predictive maturity and demonstrate its application on a nontrivial problem: predicting the plastic deformation of polycrystals.
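To make the trade-off concrete, the minimal sketch below compares the two options by an assumed maturity gain per unit cost. The component estimates, cost figures, and the decision rule itself are illustrative assumptions, not the framework defined in the paper.

```python
# Hypothetical sketch: choosing between experimentation and code development.
# All quantities and the gain-per-cost rule are assumptions for illustration;
# the paper's predictive-maturity framework is not reproduced here.

def allocate_resources(prediction_uncertainty, prediction_bias,
                       cost_of_experiments, cost_of_development):
    """Pick the option with the larger assumed maturity gain per unit cost."""
    # Assumption: experiments chiefly reduce uncertainty, while further code
    # development chiefly reduces bias (the trade-off the abstract describes).
    gain_experiments = prediction_uncertainty / cost_of_experiments
    gain_development = prediction_bias / cost_of_development
    return ("experimentation" if gain_experiments >= gain_development
            else "code development")

# Toy usage with made-up estimates and costs.
print(allocate_resources(prediction_uncertainty=0.30, prediction_bias=0.10,
                         cost_of_experiments=1.0, cost_of_development=2.0))
```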
In partitioned analysis of systems driven by the interaction of functionally distinct but strongly coupled constituents, the predictive accuracy of the simulation hinges on the accuracy of the individual constituent models. The improvement in predictive accuracy that can be gained by refining a constituent model depends not only on the relative importance of that constituent but also on its inherent uncertainty and inaccuracy. A need therefore exists to prioritize code development efforts so that available resources are allocated cost-effectively to the constituents that need improvement the most. This article proposes a novel, quantitative code prioritization index to accomplish this task and demonstrates its application in a case study of a steel frame with semirigid connections. Findings show that integrating high-fidelity constituent models improves the predictive ability of model-based simulation; however, the rate of improvement depends on the sequence in which the constituents are improved.
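As a rough illustration of how such an index might rank constituents, the toy sketch below scores each constituent by the product of its relative importance and its uncertainty. The multiplicative form, constituent names, and numbers are all assumptions; the article's actual code prioritization index is not reproduced here.

```python
# Illustrative sketch only: a toy ranking that assumes a prioritization score
# grows with both a constituent's relative importance (e.g., a sensitivity
# measure) and its inherent uncertainty or inaccuracy. Values are made up.

constituents = {
    # name: (relative_importance, constituent_uncertainty)
    "connection_model":  (0.6, 0.40),
    "beam_column_model": (0.3, 0.10),
    "panel_zone_model":  (0.1, 0.25),
}

def priority(importance, uncertainty):
    # Assumed multiplicative form; the article's index may differ.
    return importance * uncertainty

ranking = sorted(constituents.items(),
                 key=lambda kv: priority(*kv[1]), reverse=True)
for name, (imp, unc) in ranking:
    print(f"{name}: score = {priority(imp, unc):.3f}")
```

Under this toy scoring, the constituent that is both influential and poorly characterized rises to the top of the development queue, matching the abstract's point that the sequence of improvements matters.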
Modeling and simulation are relied upon in many fields of science and engineering as computational surrogates for experimental testing. To justify the use of simulations for decision making, however, it is critical to determine, and when necessary mitigate, the biases and uncertainties in model predictions, a task that invariably requires validation experiments. To use experimental resources efficiently, validation experiments must be designed to achieve the maximum possible increase in model predictive ability through calibration of the model against experiments. This need for efficiency is addressed by the concept of optimal design of validation experiments, which entails optimizing a predefined criterion when selecting the settings of future experiments. This paper presents an improved optimization criterion that incorporates two important factors for the optimal design of validation experiments: (1) how well the model reproduces the validation experiments, and (2) how well the validation experiments cover the domain of applicability. The criterion presented herein selects settings for future experiments with the goal of reaching a desired level of predictive ability in the computer model with a minimal number of validation experiments. It explores the entire application domain by accounting for coverage, and it exploits regions of the domain with high variability by accounting for an empirically defined discrepancy bias. The effectiveness of this new criterion is compared with that of two well-established criteria through a simulated case study involving the stress-strain response and textural evolution of polycrystalline materials. The proposed criterion proves efficient at improving the predictive capability of the numerical model, particularly when little experimental data is available for validation.
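A minimal sketch of a selection criterion in this spirit is given below: candidate settings are scored by a weighted sum of an empirical discrepancy surrogate (exploitation of high-variability regions) and the distance to the nearest completed experiment (coverage of the domain). The weighting, distance metric, and discrepancy surrogate are assumptions, not the paper's exact formulation.

```python
# Sketch of a coverage-plus-discrepancy criterion, assuming a weighted-sum
# form; the paper's actual criterion is not reproduced here.
import numpy as np

def select_next_experiment(candidates, performed, discrepancy, w=0.5):
    """Rank candidate settings (rows) by a weighted sum of
    (1) an empirical discrepancy surrogate at each candidate and
    (2) normalized distance to the nearest performed experiment (coverage)."""
    candidates = np.atleast_2d(candidates)
    performed = np.atleast_2d(performed)
    # Coverage term: prefer candidates far from already-performed experiments.
    dists = np.min(np.linalg.norm(
        candidates[:, None, :] - performed[None, :, :], axis=-1), axis=1)
    scores = w * discrepancy(candidates) + (1 - w) * dists / dists.max()
    return candidates[np.argmax(scores)]

# Toy usage over a 1-D setting space with a made-up discrepancy surrogate.
grid = np.linspace(0.0, 1.0, 51)[:, None]
done = np.array([[0.1], [0.5]])
best = select_next_experiment(grid, done,
                              lambda x: np.abs(np.sin(3 * x[:, 0])))
print("next experiment setting:", best)
```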