In this paper, we address the problem of finding the simulated system with the best (maximum or minimum) expected performance when the number of alternatives is finite, but large enough that ranking-and-selection (R&S) procedures may require too much computation to be practical. Our approach is to use the data provided by the first stage of sampling in an R&S procedure to screen out alternatives that are not competitive, and thereby avoid the (typically much larger) second-stage sample for these systems. Our procedures represent a compromise between standard R&S procedures, which are easy to implement but can be computationally inefficient, and fully sequential procedures, which can be statistically efficient but are more difficult to implement and depend on more restrictive assumptions. We present a general theory for constructing combined screening and indifference-zone selection procedures, several specific procedures, and a portion of an extensive empirical evaluation.
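To make the two-stage idea concrete, the following is a minimal Python sketch, not the paper's exact procedure: first-stage sample means and variances are used to screen out clearly inferior systems (bigger is assumed better), and Rinott-style second-stage sample sizes are computed only for the survivors. The constant h, the indifference-zone parameter delta, and the screening tolerance are illustrative placeholders rather than values prescribed by the paper.

```python
# Minimal sketch of combined screening + indifference-zone selection (illustrative only).
import numpy as np

def screen_and_select(first_stage, delta=0.5, h=3.0):
    """first_stage: dict {system name: 1-D array of first-stage observations}.
    delta: indifference-zone parameter; h: Rinott-type constant (placeholder here;
    in practice it is computed for the desired overall confidence level)."""
    names = list(first_stage)
    means = {i: first_stage[i].mean() for i in names}
    vars_ = {i: first_stage[i].var(ddof=1) for i in names}
    n0 = {i: len(first_stage[i]) for i in names}

    # Screening: eliminate system i if some system j beats it by more than a
    # tolerance built from both first-stage variances (subset-selection style).
    survivors = []
    for i in names:
        keep = True
        for j in names:
            if j == i:
                continue
            w = h * np.sqrt(vars_[i] / n0[i] + vars_[j] / n0[j])  # illustrative tolerance
            if means[j] - means[i] > max(w - delta, 0.0):
                keep = False
                break
        if keep:
            survivors.append(i)

    # Second stage only for survivors: Rinott-style total sample sizes.
    second_stage_n = {i: max(n0[i], int(np.ceil((h * np.sqrt(vars_[i]) / delta) ** 2)))
                      for i in survivors}
    return survivors, second_stage_n

# Tiny usage example with synthetic first-stage data.
rng = np.random.default_rng(0)
stage1 = {f"sys{i}": rng.normal(loc=0.2 * i, scale=1.0, size=20) for i in range(10)}
print(screen_and_select(stage1))
```

Systems screened out never receive the second-stage sample, which is where the computational savings over a standard two-stage R&S procedure come from.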
This paper develops a model for allocating cross-trained workers at the beginning of a shift in a multidepartment service environment. It assumes departments are trying to maximize objective functions that are concave with respect to the number of workers assigned. Worker capabilities are described by parameters that range from zero to one, with fractional values representing workers who are less than fully qualified. The nonlinear programming model presented is a variant of the generalized assignment problem. The model is used in a series of experiments to investigate the value of cross-utilization as a function of factors such as demand variability and levels of cross-training. Results show that the benefits of cross-utilization can be substantial, and in many cases a small degree of cross-training can capture most of the benefits. Beyond a certain amount, additional cross-training adds little value, and the preferred amount depends heavily on the level of demand variability.

Keywords: manpower scheduling, service operations management, mathematical programming
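As a toy illustration of the kind of model described above, here is a Python sketch, not the paper's formulation: effective staffing in a department is taken to be the sum of the capability parameters of the workers assigned to it, each department's objective is an illustrative concave (square-root) function of effective staffing, and a tiny instance is solved by exhaustive enumeration. The worker names, capability values, and department weights are made up for illustration.

```python
# Toy cross-trained worker assignment with concave departmental objectives (illustrative).
from itertools import product
import math

workers = ["w1", "w2", "w3"]
departments = ["A", "B"]
q = {"w1": {"A": 1.0, "B": 0.6},    # capability parameters in [0, 1]; < 1 means partially qualified
     "w2": {"A": 0.7, "B": 1.0},
     "w3": {"A": 1.0, "B": 0.4}}
weight = {"A": 10.0, "B": 8.0}      # illustrative department weights

def dept_value(d, effective_staff):
    # Concave (diminishing-returns) objective in effective staffing.
    return weight[d] * math.sqrt(effective_staff)

best_value, best_assignment = -1.0, None
for assignment in product(departments, repeat=len(workers)):
    staff = {d: 0.0 for d in departments}
    for w, d in zip(workers, assignment):
        staff[d] += q[w][d]
    value = sum(dept_value(d, staff[d]) for d in departments)
    if value > best_value:
        best_value, best_assignment = value, dict(zip(workers, assignment))

print(best_assignment, round(best_value, 2))
```

A realistic instance would of course be solved with a nonlinear- or integer-programming solver rather than enumeration; the point here is only the structure of the objective and the role of the fractional capability parameters.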
We introduce ASAP3, a refinement of the batch means algorithms ASAP and ASAP2 that delivers point and confidence-interval estimators for the expected response of a steady-state simulation. ASAP3 is a sequential procedure designed to produce a confidence-interval estimator that satisfies user-specified requirements on absolute or relative precision as well as coverage probability. ASAP3 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality; and then ASAP3 fits a first-order autoregressive (AR(1)) time series model to the batch means. If necessary, the batch size is further increased until the autoregressive parameter in the AR(1) model does not significantly exceed 0.8. Next, ASAP3 computes the terms of an inverse Cornish-Fisher expansion for the classical batch means t-ratio based on the AR(1) parameter estimates; and finally ASAP3 delivers a correlation-adjusted confidence interval based on this expansion. Regarding not only conformance to the precision and coverage-probability requirements but also the mean and variance of the half-length of the delivered confidence interval, ASAP3 compared favorably to other batch means procedures (namely, ABATCH, ASAP, ASAP2, and LBATCH) in an extensive experimental performance evaluation.
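The following Python sketch illustrates only the core batch-means/AR(1) idea, assuming a simple variance-inflation adjustment; it omits ASAP3's multivariate normality testing, batch-size escalation logic, and inverse Cornish-Fisher expansion. Function and parameter names are ours, not ASAP3's.

```python
# Simplified batch-means/AR(1) confidence interval (illustrative; not full ASAP3).
import numpy as np
from scipy import stats

def ar1_adjusted_ci(data, batch_size, alpha=0.05):
    m = len(data) // batch_size                       # number of batches
    batch_means = data[:m * batch_size].reshape(m, batch_size).mean(axis=1)
    ybar = batch_means.mean()
    s2 = batch_means.var(ddof=1)
    centered = batch_means - ybar
    phi = np.sum(centered[1:] * centered[:-1]) / np.sum(centered ** 2)  # lag-1 AR(1) estimate
    phi = min(max(phi, -0.99), 0.99)
    inflate = (1 + phi) / (1 - phi)                   # large-sample variance inflation for AR(1)
    half = stats.t.ppf(1 - alpha / 2, m - 1) * np.sqrt(s2 * inflate / m)
    return ybar - half, ybar + half

# Example: a steady-state AR(1) output process with mean 5.
rng = np.random.default_rng(1)
x = np.empty(100_000); x[0] = 5.0
for t in range(1, len(x)):
    x[t] = 5.0 + 0.9 * (x[t - 1] - 5.0) + rng.normal()
print(ar1_adjusted_ci(x, batch_size=1_000))
```

When the batch means are nearly uncorrelated the adjustment factor is close to one and the interval reduces to the classical batch-means interval; positive residual correlation widens it.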
Background: In 2009 and the early part of 2010, the northern hemisphere had to cope with the first waves of the new influenza A (H1N1) pandemic. Despite high-profile vaccination campaigns in many countries, delays in administration of vaccination programs were common, and high vaccination coverage levels were not achieved. This experience suggests the need to explore the epidemiological and economic effectiveness of additional, reactive strategies for combating pandemic influenza.
Methods: We use a stochastic model of pandemic influenza to investigate realistic strategies that can be used in reaction to developing outbreaks. The model is calibrated to documented illness attack rates and basic reproductive number (R0) estimates, and constructed to represent a typical mid-sized North American city.
Results: Our model predicts an average illness attack rate of 34.1% in the absence of intervention, with total costs associated with morbidity and mortality of US$81 million for such a city. Attack rates and economic costs can be reduced to 5.4% and US$37 million, respectively, when low-coverage reactive vaccination and limited antiviral use are combined with practical, minimally disruptive social distancing strategies, including short-term, as-needed closure of individual schools, even when vaccine supply-chain-related delays occur. Results improve with increasing vaccination coverage and higher vaccine efficacy.
Conclusions: Such combination strategies can be substantially more effective than vaccination alone from epidemiological and economic standpoints, and warrant strong consideration by public health authorities when reacting to future outbreaks of pandemic influenza.
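As a rough sanity check on the relationship between R0 and attack rate, and not a substitute for the paper's detailed stochastic city model, the classical homogeneous-mixing final-size equation z = 1 - exp(-R0 z) can be solved numerically; the sketch below does so by fixed-point iteration for a few illustrative R0 values.

```python
# Back-of-envelope final-size calculation for a homogeneously mixing, fully susceptible
# population (illustrative only; an agent-based city model will not match these numbers).
import math

def final_attack_rate(r0, tol=1e-10):
    z = 0.5
    for _ in range(10_000):
        z_new = 1.0 - math.exp(-r0 * z)   # fixed-point iteration on z = 1 - exp(-R0 z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

for r0 in (1.2, 1.4, 1.6):
    print(r0, round(final_attack_rate(r0), 3))
```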
We present a stochastic model of the daily operations of an airline. Its primary purpose is to evaluate plans, such as crew schedules, as well as recovery policies in a random environment. We describe the structure of the stochastic model, sources of disruptions, recovery policies, and performance measures. Then, we describe SimAir, our simulation implementation of the stochastic model, and give computational results. Finally, we give future directions for the study of airline recovery policies and planning under uncertainty.
We present and evaluate three ranking-and-selection procedures for use in steady-state simulation experiments when the goal is to find which among a finite number of alternative systems has the largest or smallest long-run average performance. All three procedures extend existing methods for independent and identically normally distributed observations to general stationary output processes, and all procedures are sequential. We also provide our thoughts about the evaluation of simulation design and analysis procedures, and illustrate these concepts in our evaluation of the new procedures.
When designing steady-state computer simulation experiments, one may be faced with the choice of batching observations in one long run or replicating a number of smaller runs. Both methods are potentially useful in the course of undertaking simulation output analysis. The tradeoffs between the two alternatives are well known: batching ameliorates the effects of initialization bias, but produces batch means that might be correlated; replication yields independent sample means, but may suffer from initialization bias at the beginning of each of the runs. We present several new results and specific examples to lend insight as to when one method might be preferred over the other. In steady state, batching and replication perform similarly in terms of estimating the mean and variance parameter, but replication tends to do better than batching with regard to the performance of confidence intervals for the mean. Such a victory for replication may be hollow, for in the presence of an initial transient, batching often performs better than replication when it comes to point and confidence-interval estimation of the steady-state mean. We conclude, like other classic references, that in the context of estimation of the steady-state mean, batching is typically the wiser approach.
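To illustrate the tradeoff, here is a small Python experiment of our own (not from the paper): a single AR(1) output process with an initial transient is analyzed once with batch means from one long run and once with the means of several independent replications, using the same total simulation budget.

```python
# Batching vs. replication on an AR(1) process started far from its steady-state mean
# (illustrative comparison only).
import numpy as np
from scipy import stats

def ar1_with_transient(n, mean=10.0, phi=0.8, start=0.0, rng=None):
    rng = rng or np.random.default_rng()
    x = np.empty(n); x[0] = start                 # start far from the steady-state mean
    for t in range(1, n):
        x[t] = mean + phi * (x[t - 1] - mean) + rng.normal()
    return x

rng = np.random.default_rng(7)
total_budget, k = 20_000, 20                      # same budget for both designs

# Batching: one long run split into k batch means (only one transient to overcome).
long_run = ar1_with_transient(total_budget, rng=rng)
bm = long_run.reshape(k, -1).mean(axis=1)

# Replication: k independent shorter runs (k transients), one sample mean per run.
reps = np.array([ar1_with_transient(total_budget // k, rng=rng).mean() for _ in range(k)])

for label, means in (("batching", bm), ("replication", reps)):
    half = stats.t.ppf(0.975, k - 1) * means.std(ddof=1) / np.sqrt(k)
    print(f"{label:>11}: {means.mean():.3f} +/- {half:.3f}")
```

Because every replication pays the initialization bias anew while the single long run pays it once, the replication estimator tends to be pulled further from the true steady-state mean in this setting, which is the intuition behind the paper's conclusion.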