Designing and optimizing complex systems often requires numerous evaluations of a quantity of interest. This is typically achieved by querying potentially expensive numerical models in an optimization process. To alleviate the cost of optimization, surrogate models can be used in lieu of the original model, as they are cheaper to evaluate. In addition, different information sources with varying fidelity, such as numerical models, experimental results or historical data may be available to estimate the quantity of interest. This work proposes a strategy to adaptively construct and exploit a multifidelity surrogate when multiple information sources of varying fidelity are available. One of the distinguishing features of the proposed approach is the relaxation of the common assumption of hierarchical relationships among information sources. This is achieved by endowing the surrogate representation with uncertainty functions that can vary across the design space; this uncertainty quantifies the fidelity of the underlying information source. The resulting multifidelity surrogate is used in an optimization setting to identify the next design to evaluate, as well as to select the information sources with which to perform the evaluation, based on information source evaluation cost and fidelity. For an aerodynamic design example, the proposed strategy leverages multifidelity information to reduce the number of evaluations of the expensive information source needed during the optimization.
A multifidelity approach to design and analysis for complex systems seeks to exploit optimally all available models and data. Existing multifidelity approaches generally attempt to calibrate low-fidelity models or replace low-fidelity analysis results using data from higher fidelity analyses. This paper proposes a fundamentally different approach that uses the tools of estimation theory to fuse together information from multifidelity analyses, resulting in a Bayesian
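The core estimation-theoretic idea behind fusing multifidelity analyses can be illustrated with precision-weighted (inverse-variance) fusion of independent Gaussian estimates. This is a minimal sketch of the general principle, not the paper's actual formulation; the function name and the example numbers are illustrative assumptions.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Precision-weighted (inverse-variance) fusion of independent
    Gaussian estimates of the same quantity. Higher-fidelity
    estimates (smaller variance) receive larger weight."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Fuse a low-fidelity estimate (mean 2.0, variance 1.0) with a
# high-fidelity estimate (mean 2.5, variance 0.25):
m, v = fuse_estimates([2.0, 2.5], [1.0, 0.25])
```

Note that the fused variance is smaller than either input variance, which is the sense in which fusion extracts value from the low-fidelity source rather than discarding it.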
SUMMARY: To support effective decision making, engineers should comprehend and manage various uncertainties throughout the design process. Unfortunately, in today's modern systems, uncertainty analysis can become cumbersome and computationally intractable for one individual or group to manage. This is particularly true for systems composed of a large number of components. In many cases, these components may be developed by different groups and even run on different computational platforms. This paper proposes an approach for decomposing the uncertainty analysis task among the various components comprising a feed-forward system and synthesizing the local uncertainty analyses into a system uncertainty analysis. Our proposed decomposition-based multicomponent uncertainty analysis approach is shown to be provably convergent in distribution under certain conditions. The proposed method is illustrated on quantification of uncertainty for a multidisciplinary gas turbine system and is compared to a traditional system-level Monte Carlo uncertainty analysis approach. Copyright © 2014 John Wiley & Sons, Ltd.
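The traditional system-level Monte Carlo baseline that the paper compares against can be sketched for a two-component feed-forward chain. The component models and distributions below are made-up placeholders; the paper's decomposition-based method instead performs each component's uncertainty analysis locally and then synthesizes the results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical components in a feed-forward chain: the output
# of component A feeds the input of component B.
def component_a(u):
    # Placeholder local model of component A, with small model noise.
    return u**2 + 0.1 * rng.normal(size=u.shape)

def component_b(y):
    # Placeholder local model of component B.
    return np.exp(-y)

# System-level Monte Carlo: sample the system input and push every
# sample through the full chain in one pass.
u = rng.normal(0.0, 1.0, size=50_000)
y_a = component_a(u)
y_sys = component_b(y_a)
system_mean = y_sys.mean()
```

The drawback this baseline illustrates is that every system-level sample requires evaluating every component, which is exactly the coupling that a decomposition-based approach avoids.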
Calculation of phase diagrams is one of the fundamental tools in alloy design, more specifically under the framework of Integrated Computational Materials Engineering. Uncertainty quantification of phase diagrams is the first step required to provide confidence for decision making in property- or performance-based design. As a manner of illustration, a thorough probabilistic assessment of the CALPHAD model parameters is performed against the available data for a Hf-Si binary case study using a Markov Chain Monte Carlo sampling approach. The plausible optimum values and uncertainties of the parameters are thus obtained, which can be propagated to the resulting phase diagram. Using the parameter values obtained from deterministic optimization in a computational thermodynamic assessment tool (in this case Thermo-Calc) as the prior information for the parameter values and ranges in the sampling process is often necessary to achieve a reasonable cost for uncertainty quantification. This brings up the problem of finding an appropriate CALPHAD model with a high level of confidence, which is a very hard and costly task that requires considerable expert skill. A Bayesian hypothesis testing based on Bayes' factors is proposed to fulfill the need for model selection in this case, which is applied to compare four recommended models for the Hf-Si system. However, it is demonstrated that information fusion approaches, i.e., Bayesian model averaging and an error correlation-based model fusion, can be used to combine the useful information existing in all the given models rather than just using the best selected model, which may lack some information about the system being modelled.
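The Bayesian model averaging step mentioned above can be sketched in a few lines: model predictions are combined using posterior model probabilities derived from the model evidences that also underlie Bayes factors. This is a generic illustration with made-up numbers, assuming equal prior model probabilities, not the paper's CALPHAD-specific implementation.

```python
import numpy as np

def bayes_model_average(predictions, log_evidences):
    """Combine per-model predictions with posterior model
    probabilities computed from (log) model evidences, assuming
    equal prior probabilities for all candidate models."""
    log_ev = np.asarray(log_evidences, dtype=float)
    w = np.exp(log_ev - log_ev.max())   # shift for numerical stability
    w /= w.sum()                        # posterior model weights
    preds = np.asarray(predictions, dtype=float)
    return w, w @ preds

# Three candidate models predicting the same quantity; the third
# model has much weaker evidence and is effectively discounted.
weights, fused = bayes_model_average([1.0, 1.2, 3.0],
                                     [-10.0, -10.5, -20.0])
```

Averaging over models in this way retains information from every plausible model, which is the advantage the abstract notes over committing to the single best model.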
Nonlinear state-space models are ubiquitous in modeling real-world dynamical systems. Sequential Monte Carlo (SMC) techniques, also known as particle methods, are a well-known class of parameter estimation methods for this general class of state-space models. Existing SMC-based techniques rely on excessive sampling of the parameter space, which makes their computation intractable for large systems or tall data sets. Bayesian optimization techniques have been used for fast inference in state-space models with intractable likelihoods. These techniques aim to find the maximum of the likelihood function by sequential sampling of the parameter space through a single SMC approximator. Various SMC approximators with different fidelities and computational costs are often available for sample-based likelihood approximation. In this paper, we propose a multi-fidelity Bayesian optimization algorithm for the inference of general nonlinear state-space models (MFBO-SSM), which enables simultaneous sequential selection of parameters and approximators. The accuracy and speed of the algorithm are demonstrated by numerical experiments using synthetic gene expression data from a gene regulatory network model and real data from the VIX stock price index.
Numerical simulation models to support decision-making and policy-making processes are often complex, involving many disciplines, many inputs, and long computation times. Inputs to such models are inherently uncertain, leading to uncertainty in model outputs. Characterizing, propagating, and analyzing this uncertainty is critical both to model development and to the effective application of model results in a decision-making setting; however, the many thousands of model evaluations required to sample the uncertainty space (e.g., via Monte Carlo sampling) present an intractable computational burden. This paper presents a novel surrogate modeling methodology designed specifically for propagating uncertainty from model inputs to model outputs and for performing a global sensitivity analysis, which characterizes the contributions of uncertainties in model inputs to output variance, while maintaining the quantitative rigor of the analysis by providing confidence intervals on surrogate predictions. The approach is developed for a general class of models and is demonstrated on an aircraft emissions prediction model that is being developed and applied to support aviation environmental policy-making. The results demonstrate how the confidence intervals on surrogate predictions can be used to balance the tradeoff between computation time and uncertainty in the estimation of the statistical outputs of interest.
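The basic workflow the abstract describes, replacing an expensive model with a cheap surrogate for Monte Carlo uncertainty propagation and quantifying confidence in the resulting statistics, can be sketched as follows. The "expensive" model, the polynomial surrogate, and the input distribution are all illustrative assumptions, not the paper's method or application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive simulation model (hypothetical).
def expensive_model(x):
    return np.sin(x) + 0.1 * x**2

# Build a cheap polynomial surrogate from a handful of model runs.
x_train = np.linspace(-2.0, 2.0, 20)
surrogate = np.poly1d(np.polyfit(x_train, expensive_model(x_train), deg=4))

# Propagate input uncertainty x ~ N(0, 1) through the surrogate.
x_samples = rng.normal(0.0, 1.0, size=100_000)
y = surrogate(x_samples)
mean = y.mean()

# 95% confidence half-width on the estimated output mean; this is the
# kind of interval used to trade computation time against uncertainty.
half_width = 1.96 * y.std(ddof=1) / np.sqrt(len(y))
```

Tightening `half_width` requires more surrogate evaluations (cheap) or a better surrogate (more expensive model runs), which is the tradeoff the paper's confidence intervals make explicit.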
This paper discusses the development and comparison of two emissions modeling methods for predicting NOx and CO emissions from aircraft gas turbine combustors. We compare an empirical and a physics-based approach. The objective is to assess the strengths and weaknesses of the methods for predicting the emissions of current and potential future gas turbine engines for the purpose of assessing design tradeoffs and interdependencies in a policy-making setting. The empirical method is based on a P3-T3 approach using polynomial fits to certification data. The physics-based method is developed using high-level combustor design parameters and ideal reactors. The predictive capability of each method is assessed by comparing model estimates of NOx and CO emissions to certification data from three different industry combustors.
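A P3-T3-style empirical estimate has the general shape sketched below: a polynomial fit of the NOx emissions index against combustor inlet temperature (T3) at reference conditions, scaled by a combustor inlet pressure (P3) ratio raised to an exponent. The data points and the exponent value here are made-up illustrations, not certification data or the paper's fitted coefficients.

```python
import numpy as np

# Illustrative (made-up) reference-condition points: combustor inlet
# temperature T3 [K] vs. NOx emissions index [g per kg fuel].
T3_ref = np.array([600.0, 700.0, 800.0, 900.0])
EI_ref = np.array([8.0, 14.0, 23.0, 36.0])

# Fit the reference-condition curve with a low-order polynomial.
poly = np.poly1d(np.polyfit(T3_ref, EI_ref, deg=2))

def ei_nox_p3t3(T3, P3, P3_ref, n=0.4):
    """P3-T3-style estimate: polynomial in T3 at the reference
    pressure, scaled by a pressure-ratio term with an assumed
    exponent n. All numbers here are illustrative."""
    return poly(T3) * (P3 / P3_ref) ** n
```

The appeal of this empirical form is that it needs only certification-style data to calibrate, whereas the physics-based alternative in the paper requires combustor design parameters and reactor modeling.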