We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information.

Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales.

In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ.
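To make the optimization problems referred to above concrete, the optimal bounds on a probability of failure can be sketched as follows. This is a generic illustration rather than notation taken from the abstract: here A denotes the set of admissible scenarios (f, mu), i.e. pairs of response function and input distribution compatible with the given assumptions and information, and "failure" is the event that the response exceeds a threshold a.

```latex
% Illustrative sketch (generic notation introduced for this example):
% any probability of failure consistent with the assumptions and information
% must lie between these two extremal values.
\mathcal{L}(\mathcal{A}) \;=\; \inf_{(f,\mu)\,\in\,\mathcal{A}} \mu\bigl[f(X) \ge a\bigr],
\qquad
\mathcal{U}(\mathcal{A}) \;=\; \sup_{(f,\mu)\,\in\,\mathcal{A}} \mu\bigl[f(X) \ge a\bigr].
```

The upper value is the tightest bound on the probability of failure that the stated assumptions alone can justify; the Optimal Concentration Inequalities mentioned above correspond to the case in which the admissible set encodes exactly the information used by the classical Hoeffding or McDiarmid inequalities.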
Key questions that scientists and engineers typically want to address can be formulated in terms of predictive science. Questions such as "How well does my computational model represent reality?", "What are the most important parameters in the problem?", and "What is the best next experiment to perform?" are fundamental to solving scientific problems. mystic is a framework for massively parallel optimization and rigorous sensitivity analysis that enables these motivating questions to be addressed quantitatively as global optimization problems. Realistic physics, engineering, and materials models often have hundreds of input parameters, hundreds of constraints, and execution times of seconds or longer; in more extreme cases, models may be multi-scale and require high-performance computing clusters for their evaluation. Predictive calculations, formulated as a global optimization over a potential surface in design parameter space, may require an already prohibitively large simulation to be performed hundreds, if not thousands, of times. The need to prepare, schedule, and monitor thousands of model evaluations, and to dynamically explore and analyze the results, is a challenging problem that requires a software infrastructure capable of distributing and managing computations on large-scale heterogeneous resources. In this paper, we present the design of an optimization framework and a framework for heterogeneous computing that, when used together, can make computationally intractable sensitivity and optimization problems much more tractable. The optimization framework provides global search algorithms that have been extended to run in parallel, so that evaluations of the model can be distributed to appropriate large-scale resources while the optimizer centrally manages their interactions and navigates the objective function. New methods have been developed for imposing and solving constraints that help reduce the size and complexity of the optimization problem. Additionally, new algorithms have been developed that launch multiple optimizers in parallel, allowing highly efficient local search algorithms to provide fast global optimization. In this way, parallelism in optimization allows us not only to find global minima, but also to simultaneously find all local minima and transition points, providing a much more efficient means of mapping out a potential energy surface.
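As a concrete illustration of the usage pattern described above, the following is a minimal sketch of a bounded global search using mystic's documented differential-evolution solver and its built-in Rosenbrock test model. It is illustrative only: the three-parameter bounds, initial guess, and population size are made up for the example, and exact keyword arguments may differ across mystic versions.

```python
# Minimal illustrative sketch (not from the paper): a bounded global search
# with mystic's differential-evolution solver on the built-in Rosenbrock
# test objective.  In the large-scale setting described above, each call to
# the objective would instead launch a (possibly distributed) model run.
from mystic.solvers import diffev2   # global optimizer
from mystic.models import rosen      # standard Rosenbrock test function

bounds = [(0.0, 2.0)] * 3            # box constraints on three design parameters (assumed)
x0 = [0.8, 1.2, 0.7]                 # arbitrary initial guess inside the bounds

# The solver navigates the objective within the bounds; npop sets the number
# of candidate parameter vectors explored per generation.
solution = diffev2(rosen, x0=x0, bounds=bounds, npop=40)
print(solution)                      # best parameter vector found
```

mystic also exposes class-based solvers whose evaluation loop can be handed a parallel map (for example, from its companion pathos package), which is one way the distributed model evaluation described above can be realized in practice; the one-line interface shown here is simply the easiest entry point.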
Experimental results are described from a pushbroom imaging spectrometer module demonstrating very low levels of spectral and spatial distortion, at the level of a few percent of a pixel, and similarly small variation of the spectral response function with field position. These spectrometer attributes make possible the extraction of accurate spectroscopic information. The spectrometer achieves this high level of performance despite relaxed tolerances in fabrication and alignment. A quick and effective alignment method is described that permits the spectrometer to approximate its design performance. The implications of the results for the calibration of pushbroom imaging spectrometers are also discussed.