We extend the basic theory of kriging, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting. Our goal is to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables, or uncontrollable environmental variables. To accomplish this, we characterize both the intrinsic uncertainty inherent in a stochastic simulation and the extrinsic uncertainty about the unknown response surface. We use tractable examples to demonstrate why it is critical to characterize both types of uncertainty, derive general results for experiment design and analysis, and present a numerical example that illustrates the stochastic kriging method.
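As a concrete illustration of the two uncertainty types (our sketch, not the paper's code), the following Python fragment computes a stochastic kriging prediction from simulation sample means: extrinsic uncertainty enters through a Gaussian-process covariance across design points, and intrinsic uncertainty through a diagonal covariance whose entries are each point's sample variance divided by its number of replications. The squared-exponential kernel and the fixed hyperparameters `beta0`, `tau2`, and `theta` are assumptions for illustration; in practice they would be estimated, for example by maximum likelihood.

```python
import numpy as np

def sk_predict(X, Ybar, S2, n_reps, x0, beta0=0.0, tau2=1.0, theta=1.0):
    """Stochastic kriging predictor at x0 (a minimal sketch).

    X      : (k, d) design points
    Ybar   : (k,)   sample means of the simulation output at each point
    S2     : (k,)   sample variances of the replications (intrinsic noise)
    n_reps : (k,)   number of replications at each design point
    beta0, tau2, theta : trend, GP variance, and length-scale, assumed
                         known here; normally estimated from the data
    """
    # Extrinsic covariance: squared-exponential kernel across design points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    Sigma_M = tau2 * np.exp(-d2 / (2 * theta**2))
    # Intrinsic covariance: diagonal, the variance of each sample mean.
    Sigma_eps = np.diag(S2 / n_reps)
    # Covariance between the prediction point and the design points.
    d2_0 = ((X - x0) ** 2).sum(axis=1)
    sigma_0 = tau2 * np.exp(-d2_0 / (2 * theta**2))
    # GP regression on the sample means with heteroscedastic noise.
    w = np.linalg.solve(Sigma_M + Sigma_eps, Ybar - beta0)
    return beta0 + sigma_0 @ w
```

A fuller implementation would also return the prediction variance, which quantifies the remaining extrinsic uncertainty at `x0`.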
Variance-based global sensitivity analysis decomposes the variance of the output of a computer model, resulting from uncertainty about the model's inputs, into variance components associated with each input's contribution. The two most common variance-based sensitivity measures, the first-order effects and the total effects, may fail to sum to the total variance. They are often used together in sensitivity analysis because neither of them adequately deals with interactions in the way the inputs affect the output. Therefore, Owen proposed an alternative sensitivity measure, based on the concept of the Shapley value in game theory, and showed that it always sums to the correct total variance if inputs are independent. We analyze Owen's measure, which we call the Shapley effect, in the case of dependent inputs. We show empirically how the first-order and total effects, even when used together, may fail to appropriately measure how sensitive the output is to uncertainty in the inputs when there is probabilistic dependence or structural interaction among the inputs. Because they involve all subsets of the inputs, Shapley effects could be expensive to compute if the number of inputs is large. We propose a Monte Carlo algorithm that makes accurate approximation of Shapley effects computationally affordable, and we discuss efficient allocation of the computational budget in this algorithm.
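To illustrate the permutation-sampling idea behind such an algorithm, here is a minimal Python sketch (our illustration, not the paper's algorithm): random permutations of the inputs are sampled, and each input is credited with its incremental contribution to the cost c(J) = Var(E[Y | X_J]). For simplicity the sketch assumes independent inputs and estimates c(J) by naive nested Monte Carlo, which the paper's algorithm improves upon; dependent inputs would require sampling the unconditioned inputs from their conditional distribution given X_J.

```python
import numpy as np

rng = np.random.default_rng(0)

def cond_var_cost(model, sample_inputs, subset, n_outer=200, n_inner=200):
    """Estimate c(J) = Var(E[Y | X_J]) by brute-force nested Monte Carlo.

    Assumes independent inputs, so conditioning on X_J just pins those
    coordinates.  `model` maps an (n, d) array to an (n,) array of outputs;
    `sample_inputs(n)` returns an (n, d) array of input draws.  The variance
    of the inner means slightly overestimates c(J) for finite n_inner.
    """
    subset = list(subset)
    fixed = sample_inputs(n_outer)              # outer draws of X_J
    means = np.empty(n_outer)
    for i in range(n_outer):
        X = sample_inputs(n_inner)              # redraw all inputs...
        X[:, subset] = fixed[i, subset]         # ...then pin the conditioned ones
        means[i] = model(X).mean()              # inner estimate of E[Y | X_J]
    return means.var(ddof=1)

def shapley_effects(model, sample_inputs, d, n_perms=100):
    """Permutation-sampling Monte Carlo for Shapley effects (a sketch)."""
    phi = np.zeros(d)
    for _ in range(n_perms):
        perm = rng.permutation(d)
        prev = 0.0
        for pos in range(d):
            c = cond_var_cost(model, sample_inputs, perm[:pos + 1])
            phi[perm[pos]] += c - prev          # input's incremental contribution
            prev = c
    return phi / n_perms

# Toy check: Y = X1 + 2*X2 with independent standard normal inputs,
# so the Shapley effects are 1 and 4 and sum to Var(Y) = 5.
model = lambda X: X[:, 0] + 2 * X[:, 1]
sample = lambda n: rng.standard_normal((n, 2))
print(shapley_effects(model, sample, d=2, n_perms=20))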
We prove fundamental theorems of asset pricing for good deal bounds in incomplete markets. These theorems relate arbitrage-freedom and uniqueness of prices for over-the-counter derivatives to existence and uniqueness of a pricing kernel that is consistent with market prices and the acceptance set of good deals. They are proved using duality of convex optimization in locally convex linear topological spaces. The concepts investigated are closely related to convex and coherent risk measures, exact functionals, and coherent lower previsions in the theory of imprecise probabilities.
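One standard way to express such bounds (the notation here is ours and may differ from the paper's) is as optimization over pricing kernels:

\[
\underline{\pi}(X) \;=\; \inf_{m \in \mathcal{M}} \mathbb{E}[mX],
\qquad
\overline{\pi}(X) \;=\; \sup_{m \in \mathcal{M}} \mathbb{E}[mX],
\]

where \(X\) is the derivative payoff and \(\mathcal{M}\) is the set of pricing kernels consistent with observed market prices and with the acceptance set of good deals. In this form, the fundamental theorems roughly pair the existence of such a kernel (\(\mathcal{M} \neq \emptyset\)) with arbitrage-freedom of the bounds, and its uniqueness (\(\mathcal{M}\) a singleton) with uniqueness of prices.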
Pricing financial options often requires Monte Carlo methods. One particular case is that of barrier options, whose payoff may be zero depending on whether or not an underlying asset crosses a barrier during the life of the option. This paper develops variance reduction techniques that take advantage of the special structure of barrier options and are appropriate for general simulation problems with similar structure. We use a change of measure at each step of the simulation to reduce the variance arising from the possibility of a barrier crossing at each monitoring date. The paper details the theoretical underpinnings of this method and evaluates alternative implementations for the cases in which exact sampling from the distribution conditional on one-step survival is and is not available. When these one-step conditional distributions are unavailable, we introduce algorithms that simultaneously combine the change of measure with estimation of the conditional probabilities. The methods proposed apply more generally to terminal-reward problems on Markov processes with absorbing states.
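As a concrete instance in which the one-step conditional distributions are available exactly, consider a discretely monitored down-and-out call under geometric Brownian motion. The following Python sketch (our illustration; all parameter values are assumed) applies the one-step survival idea: at each monitoring date the path's weight is multiplied by the probability of not crossing the barrier, and the next value is sampled conditional on survival, so no path is ever absorbed.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def down_and_out_call_oss(S0, K, B, r, sigma, T, n_steps, n_paths):
    """One-step-survival estimator for a discretely monitored
    down-and-out call under GBM (a minimal sketch)."""
    dt = T / n_steps
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * np.sqrt(dt)
    S = np.full(n_paths, float(S0))
    w = np.ones(n_paths)                        # accumulated survival probabilities
    for _ in range(n_steps):
        z_star = (np.log(B / S) - drift) / vol  # crossing threshold for Z ~ N(0,1)
        p = 1.0 - norm.cdf(z_star)              # one-step survival probability
        u = rng.random(n_paths)
        # Sample Z conditional on survival, Z > z_star, by inverse transform.
        z = norm.ppf(norm.cdf(z_star) + u * p)
        S *= np.exp(drift + vol * z)
        w *= p                                  # change of measure at this step
    payoff = w * np.maximum(S - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Example: S0=100, K=100, barrier B=90, r=5%, sigma=25%, T=1y, monthly monitoring.
print(down_and_out_call_oss(100, 100, 90, 0.05, 0.25, 1.0, 12, 100_000))
```

Because the indicator of barrier survival is replaced by its conditional expectation at every step, the variance contributed by barrier-crossing events is removed entirely.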
In a two-level nested simulation, an outer level of simulation samples scenarios, while the inner level uses simulation to estimate a conditional expectation given the scenario. Applications include financial risk management, assessing the effects of simulation input uncertainty, and computing the expected value of gathering more information in decision theory. We show that an ANOVA-like estimator of the variance of the conditional expectation is unbiased under mild conditions, and we discuss the optimal number of inner-level samples to minimize this estimator's variance given a fixed computational budget. We show that as the computational budget increases, the optimal number of inner-level samples remains bounded. This finding contrasts with previous work on two-level simulation problems in which the inner- and outer-level sample sizes must both grow without bound for the estimation error to approach zero. The finding implies that the variance of a conditional expectation can be estimated to arbitrarily high precision by a simulation experiment with a fixed inner-level computational effort per scenario, which we call a one-and-a-half-level simulation. Because the optimal number of inner-level samples is often quite small, a one-and-a-half-level simulation can avoid the heavy computational burden typically associated with two-level simulation.
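To make the estimator concrete, here is a minimal Python sketch of an ANOVA-like estimator of this type (our illustration, assuming equal inner sample sizes n): the naive variance of the scenario means overestimates Var(E[Y | scenario]) by E[Var(Y | scenario)]/n, so subtracting an estimate of that bias term yields an unbiased estimator even with a small, fixed n.

```python
import numpy as np

def var_of_cond_expectation(Y):
    """Unbiased ANOVA-like estimator of Var(E[Y | scenario]).

    Y is a (k, n) array: k outer-level scenarios, n inner-level
    replications each (n >= 2).  The between-scenario variance is
    biased upward by the average within-scenario variance over n,
    so that term is subtracted off.
    """
    k, n = Y.shape
    scenario_means = Y.mean(axis=1)
    between = scenario_means.var(ddof=1)        # biased upward by within / n
    within = Y.var(axis=1, ddof=1).mean()       # avg within-scenario variance
    return between - within / n

# Toy check: Y = S + noise with S ~ N(0, 1), so Var(E[Y | S]) = 1.
rng = np.random.default_rng(2)
k, n = 10_000, 4                                # small, fixed inner sample size
S = rng.standard_normal((k, 1))
Y = S + rng.standard_normal((k, n))
print(var_of_cond_expectation(Y))               # ~ 1.0
```

Note that n stays fixed at a small value while only k grows, which is exactly the one-and-a-half-level regime the abstract describes.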