This paper compares two methods of assessing variability in simulation output. Both methods make specific allowance for two sources of variation: that caused by uncertainty in estimating unknown input parameters (parameter uncertainty), and that caused by random variation within the simulation model itself (simulation uncertainty). The first method is based on classical statistical differential analysis; we show explicitly that, under general conditions, the two sources contribute separately to the total variation.

In the classical approach, certain sensitivity coefficients must be estimated. The effort needed to do so grows linearly with the number of unknown parameters, and there is the additional difficulty of detecting spurious variation when the number of parameters is large. It is shown that a parametric form of bootstrap sampling provides an alternative method that suffers from neither problem.

For illustration, a simulation of the operation of a small (4-node) computer communication network is used to compare the accuracy of estimates obtained by the two methods. A larger, more realistic (30-node) network is then presented, showing how the bootstrap method becomes competitive when the number of unknown parameters is large.
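To make the decomposition explicit, the classical differential (delta-method) argument can be sketched as follows; the notation here is illustrative rather than the paper's own. Let $\mu(\theta) = \mathrm{E}[Y(\theta)]$ denote the expected simulation output at input parameters $\theta = (\theta_1, \dots, \theta_p)$, estimated from data by $\hat\theta$. A first-order expansion about $\theta$, with the parameter estimates assumed independent, gives

\[
\operatorname{Var}\bigl[Y(\hat\theta)\bigr] \;\approx\; \underbrace{\sum_{i=1}^{p} \left(\frac{\partial \mu}{\partial \theta_i}\right)^{\!2} \operatorname{Var}(\hat\theta_i)}_{\text{parameter uncertainty}} \;+\; \underbrace{\sigma^2_{\mathrm{sim}}}_{\text{simulation uncertainty}},
\]

where the partial derivatives $\partial\mu/\partial\theta_i$ are the sensitivity coefficients referred to above and $\sigma^2_{\mathrm{sim}}$ is the variance of a single simulation run at fixed parameters. Estimating each sensitivity coefficient typically requires additional simulation runs, which is why the cost grows linearly with $p$; correlated parameter estimates would add covariance cross-terms.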
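By contrast, the parametric bootstrap estimates the same total variability by resampling rather than by differentiation. The following is a minimal, self-contained sketch in Python; the exponential input model, the simulate function, and all numerical values are hypothetical placeholders standing in for the paper's network simulation, not its actual model.

import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, rng):
    """Placeholder simulation model: returns one noisy output for input
    parameters theta. In the paper's setting this would be one run of
    the communication-network simulation."""
    mu = theta[0] / theta[1]            # illustrative response surface
    return mu + rng.normal(scale=0.1)   # random variation within the simulation

# Hypothetical field data from which the input parameters are estimated;
# here theta = (mean of sample A, mean of sample B).
data_a = rng.exponential(scale=2.0, size=50)
data_b = rng.exponential(scale=1.0, size=50)
theta_hat = np.array([data_a.mean(), data_b.mean()])

B = 1000                     # number of bootstrap replications
outputs = np.empty(B)
for b in range(B):
    # Parametric bootstrap: resample the data from the fitted input
    # distributions, re-estimate the parameters, then rerun the simulation.
    boot_a = rng.exponential(scale=theta_hat[0], size=data_a.size)
    boot_b = rng.exponential(scale=theta_hat[1], size=data_b.size)
    theta_b = np.array([boot_a.mean(), boot_b.mean()])
    outputs[b] = simulate(theta_b, rng)

# The spread of the replicated outputs reflects both parameter uncertainty
# and simulation uncertainty, with no sensitivity coefficients estimated.
print(f"bootstrap estimate of output std dev: {outputs.std(ddof=1):.4f}")

Because each bootstrap replication perturbs all p estimated parameters at once and simply reruns the simulation, the cost per replication does not grow with the number of parameters; this is the sense in which the method becomes competitive for the larger 30-node example.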