Computational analysis of the performance, reliability, and safety of engineered systems is spreading rapidly in industry and government. To many managers, decision makers, and politicians not trained in computational simulation, computer simulations can appear most convincing. Terminology such as "virtual prototyping," "virtual testing," "full physics simulation," and "modeling and simulation-based acquisition" is extremely appealing when budgets are highly constrained, when competitors are taking market share, or when political constraints do not allow testing of certain systems. To assess the accuracy and usefulness of computational simulations, three key activities are needed in the analysis and experimental process: verification of the computer code and the computed solution; experimental validation of most, if not all, of the mathematical models of the engineered system being simulated; and estimation of the uncertainty associated with analysis inputs, physics models, possible scenarios experienced by the system, and the outputs of interest in the simulation. The topics of verification and validation are not addressed here, but they are covered at length in the literature (see, for example, [1-6]). A number of fields have contributed to the development of uncertainty estimation techniques and procedures, such as nuclear reactor safety, underground storage of radioactive and toxic wastes, and structural dynamics (see, for example, [7-18]).

Consider, for example, a set of parameters whose values are only thought to be within specified intervals, say because the parameters are estimated from expert opinion rather than from measurements. Assume each of these parameters is treated as a random variable and assigned the least informative distribution (i.e., a uniform distribution over its interval). If extreme system responses correspond to extreme values of these parameters (i.e., values near the ends of the uniform distributions), then their probabilistic combination could predict a very low probability for such extreme system responses. Given that the parameters are known only to occur within intervals, however, this conclusion is grossly inappropriate.
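The effect can be illustrated with a short Monte Carlo sketch. The parameter count, intervals, and response function below are illustrative assumptions, not taken from any particular analysis: interval analysis reports that the extreme response is attainable, while the probabilistic combination of independent uniform variables assigns it a negligibly small probability.

```python
import random

# Hypothetical example: n parameters, each known only to lie in [0, 1].
# The system response is y = x1 + ... + xn, so the extreme response occurs
# when every parameter is simultaneously near its upper bound.
n = 10
trials = 100_000
threshold = 0.9 * n  # "extreme" response: y within 10% of its interval maximum

# Interval analysis: the response interval is simply [0, n], so the extreme
# response y = n is possible -- no probability is (or can be) attached to it.
interval_max = float(n)

# Probabilistic combination: treat each parameter as an independent uniform
# random variable and estimate P(y >= threshold) by Monte Carlo sampling.
# The true probability, P(sum of 10 uniforms >= 9) = 1/10!, is about 2.8e-7,
# so essentially no samples reach the extreme-response region.
random.seed(0)
hits = sum(
    1 for _ in range(trials)
    if sum(random.random() for _ in range(n)) >= threshold
)
estimated_prob = hits / trials

print(f"interval analysis: extreme response y = {interval_max} is attainable")
print(f"probabilistic combination: estimated P(y >= {threshold}) = {estimated_prob}")
```

The sketch shows the mismatch directly: the interval statement "y = 10 is possible" is honest about what is known, while the uniform-distribution model manufactures a near-zero probability for the same outcome from information that never contained probabilities in the first place.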
Improved Models for Epistemic Uncertainty

During the past two decades, the information theory and expert systems communities have made significant progress in developing a number of new theories that can be pursued for modeling epistemic uncertainty. Examples of the newer theories include fuzzy set theory [17,42-46], interval analysis [47,48], evidence (Dempster-Shafer) theory [49-55], possibility theory [56,57], and the theory of upper and lower previsions [58]. Some of these theories deal only with epistemic uncertainty; most deal with both epistemic and aleatory uncertainty; and some deal with other varieties of uncertainty (e.g., nonclassical logics appropriate for artificial intelligence and data fusion systems [59]). A recent article summarizes how these theories of uncertainty are related to one another from a hierarchical viewpoint [60]. The article shows that evidence theory is a generalization of cl...