We introduce ASAP3, a refinement of the batch means algorithms ASAP and ASAP2 that delivers point and confidence-interval estimators for the expected response of a steady-state simulation. ASAP3 is a sequential procedure designed to produce a confidence-interval estimator that satisfies user-specified requirements on absolute or relative precision as well as coverage probability. ASAP3 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality; then ASAP3 fits a first-order autoregressive (AR(1)) time series model to the batch means. If necessary, the batch size is further increased until the autoregressive parameter in the AR(1) model does not significantly exceed 0.8. Next, ASAP3 computes the terms of an inverse Cornish-Fisher expansion for the classical batch means t-ratio based on the AR(1) parameter estimates; finally, ASAP3 delivers a correlation-adjusted confidence interval based on this expansion. In terms of conformance to the precision and coverage-probability requirements, as well as the mean and variance of the half-length of the delivered confidence interval, ASAP3 compared favorably with other batch means procedures (namely, ABATCH, ASAP, ASAP2, and LBATCH) in an extensive experimental performance evaluation.
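The following is a loose sketch of the control flow described above, not the authors' implementation: a univariate Shapiro-Wilk test stands in for the multivariate normality test, and a simple AR(1) variance inflation stands in for the inverse Cornish-Fisher adjustment. Function and parameter names (min_batches, phi_max, init_batch_size) are illustrative choices.

```python
# Sketch of an ASAP3-style sequential batch-means confidence interval.
import numpy as np
from scipy import stats

def asap3_like_ci(x, alpha=0.05, min_batches=32, phi_max=0.8, init_batch_size=16):
    x = np.asarray(x, dtype=float)
    batch_size = init_batch_size
    while True:
        m = len(x) // batch_size
        if m < min_batches:
            raise ValueError("not enough data for the requested batch count")
        means = x[: m * batch_size].reshape(m, batch_size).mean(axis=1)
        can_double = len(x) // (2 * batch_size) >= min_batches
        # Step 1: grow the batch size until the batch means look (approximately) normal.
        if stats.shapiro(means).pvalue < 0.05 and can_double:
            batch_size *= 2
            continue
        # Step 2: fit AR(1) to the batch means via the lag-1 autocorrelation.
        c = means - means.mean()
        phi = float(np.dot(c[:-1], c[1:]) / np.dot(c, c))
        phi = min(max(phi, -0.99), 0.99)   # numerical safeguard
        # Step 3: keep growing the batch size while the AR(1) parameter is too large.
        if phi > phi_max and can_double:
            batch_size *= 2
            continue
        # Step 4: correlation-adjusted CI; an AR(1) variance inflation is used here
        # in place of ASAP3's inverse Cornish-Fisher adjustment.
        var_mean = means.var(ddof=1) / m * (1 + phi) / (1 - phi)
        half_length = stats.t.ppf(1 - alpha / 2, m - 1) * np.sqrt(var_mean)
        return means.mean(), half_length
```

Calling asap3_like_ci(simulation_output) returns a point estimate and a half-length; the real procedure would instead keep collecting data sequentially until the precision requirement is met.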
A distribution-free tabular CUSUM chart is designed to detect shifts in the mean of an autocorrelated process. The chart's average run length (ARL) is approximated by generalizing Siegmund's ARL approximation for the conventional tabular CUSUM chart based on independent and identically distributed normal observations. Control limits for the new chart are computed from the generalized ARL approximation. Also discussed are the choice of reference value and the use of batch means to handle highly correlated processes. The new chart is compared with other distribution-free procedures using stationary test processes with both normal and nonnormal marginals.

Introduction

Given a stochastic process to be monitored, a statistical process control (SPC) chart is used to detect any practically significant shift from the in-control status for that process, where the in-control status is defined as maintaining a specified target value for a given parameter of the monitored process, for example, the mean, the variance, or a quantile of the marginal distribution of the process. An SPC chart is designed to yield a specified value ARL_0 for the in-control average run length (ARL) of the chart, that is, the expected number of observations sampled from the in-control process before an out-of-control alarm is (incorrectly) raised. Given several alternative SPC charts whose control limits are determined in this way, one would prefer the chart with the smallest out-of-control average run length ARL_1, a performance measure analogous to ARL_0 for the situation in which the monitored process is in a specific out-of-control condition. If the monitored process consists of independent and identically distributed (i.i.d.) normal random variables, then control limits can be determined analytically for some charts, such as the Shewhart and tabular CUSUM charts, as detailed in Montgomery (2001).

It is more difficult to determine control limits for an SPC chart that is applied to an autocorrelated process, and much of the recent work on this problem has focused on developing distribution-based (or model-based) SPC charts, which require one of the following two conditions:

1. The in-control and out-of-control versions of the monitored process must follow specific probability distributions.

2. Certain characteristics of the monitored process, such as the first- and second-order moments, including the entire autocovariance function, must be known.

Moreover, the control limits for many distribution-based charts can only be determined by trial-and-error experimentation. Of course, if the underlying assumptions about the probability distributions describing the target process are violated, then these charts will not perform as advertised. Another limitation is that determining the control limits by trial-and-error experimentation can be very inconvenient in practical applications, especially in circumstances that require rapid calibration of the chart and do not allow extensive preliminary experimentation on training data sets to estimate ARL_0 for various trial...
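As background for the approach above, the sketch below shows a conventional one-sided tabular CUSUM together with Siegmund's ARL approximation (as given in Montgomery 2001) for i.i.d. normal observations; the paper's distribution-free chart generalizes this approximation, and that generalization is not reproduced here. The reference value k and decision interval h are in units of the process standard deviation; the names are illustrative.

```python
# Tabular CUSUM with in-control/out-of-control ARL from Siegmund's approximation.
import numpy as np

def siegmund_arl(k, h, delta=0.0):
    """Approximate ARL of an upper one-sided CUSUM for a standardized mean shift delta."""
    b = h + 1.166
    d = delta - k
    if abs(d) < 1e-12:
        return b ** 2                      # limiting case as (delta - k) -> 0
    return (np.exp(-2.0 * d * b) + 2.0 * d * b - 1.0) / (2.0 * d ** 2)

def tabular_cusum(x, mu0, sigma, k=0.5, h=5.0):
    """Return the index at which the upper CUSUM statistic first exceeds h, else None."""
    c_plus = 0.0
    for i, xi in enumerate(x):
        z = (xi - mu0) / sigma
        c_plus = max(0.0, c_plus + z - k)
        if c_plus > h:
            return i
    return None

# Two-sided in-control ARL for the textbook design k = 0.5, h = 5
# (roughly 470 by this approximation; the exact value is about 465).
arl_one_sided = siegmund_arl(k=0.5, h=5.0, delta=0.0)
print(1.0 / (2.0 / arl_one_sided))
```

In practice one inverts the approximation: given a target ARL_0 and a reference value k, solve for the decision interval h, which is how the control limits of the generalized chart are computed.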
When designing steady-state computer simulation experiments, one may be faced with the choice of batching observations in one long run or replicating a number of smaller runs. Both methods are potentially useful in the course of undertaking simulation output analysis. The tradeoffs between the two alternatives are well known: batching ameliorates the effects of initialization bias, but produces batch means that might be correlated; replication yields independent sample means, but may suffer from initialization bias at the beginning of each of the runs. We present several new results and specific examples to lend insight into when one method might be preferred over the other. In steady state, batching and replication perform similarly in terms of estimating the mean and variance parameter, but replication tends to do better than batching with regard to the performance of confidence intervals for the mean. Such a victory for replication may be hollow, however: in the presence of an initial transient, batching often performs better than replication when it comes to point and confidence-interval estimation of the steady-state mean. We conclude, as do other classic references, that in the context of estimation of the steady-state mean, batching is typically the wiser approach.
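The small sketch below, which is illustrative and not taken from the paper, contrasts the two estimators on a simulated stationary AR(1) process with the same total observation budget: batch means from one long run versus means from independent replications, each replication started away from the steady-state mean so that initialization bias is visible. The process parameters, run lengths, and batch counts are arbitrary choices.

```python
# Batch means from one long run vs. replication means from several shorter runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def ar1(n, phi=0.9, mu=10.0, x0=None):
    """Simulate an AR(1) path; x0=None starts at the mean, otherwise at x0."""
    x = np.empty(n)
    x[0] = mu if x0 is None else x0
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.standard_normal()
    return x

def ci_from_means(means, alpha=0.05):
    m = len(means)
    hl = stats.t.ppf(1 - alpha / 2, m - 1) * means.std(ddof=1) / np.sqrt(m)
    return means.mean(), hl

n_total, m = 100_000, 20

# (a) one long run cut into m contiguous batches
long_run = ar1(n_total)
batch_means = long_run.reshape(m, n_total // m).mean(axis=1)

# (b) m independent replications with the same total budget, each started at 0
rep_means = np.array([ar1(n_total // m, x0=0.0).mean() for _ in range(m)])

print("batch means :", ci_from_means(batch_means))
print("replication :", ci_from_means(rep_means))
```

With these settings the replication estimator carries the initialization bias of every short run, while the single long run amortizes it, which is the tradeoff the abstract describes.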
We explore the issues of when and how to partition arriving customers into service groups that will be served separately, in a first-come, first-served manner, by multiserver service systems having a provision for waiting, and how to assign an appropriate number of servers to each group. We assume that customers can be classified upon arrival, so that different service groups can have different service-time distributions. We provide methodology for quantifying the tradeoff between economies of scale associated with larger systems and the benefit of having customers with shorter service times separated from other customers with longer service times, as is done in service systems with express lines. To properly quantify this tradeoff, it is important to characterize service-time distributions beyond their means. In particular, it is important to also determine the variance of the service-time distribution of each service group. Assuming Poisson arrival processes, we can then model the congestion experienced by each server group as an M/G/s queue with unlimited waiting room. We use previously developed approximations for M/G/s performance measures to quickly evaluate alternative partitions.

Keywords: queues, multiserver queues, service systems, service-system design, resource sharing, service systems with express lines
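The sketch below illustrates the kind of M/G/s evaluation this tradeoff calls for; it does not reproduce the approximations used in the paper. Instead it uses the well-known scaling Wq(M/G/s) ≈ ((1 + c_s²)/2) · Wq(M/M/s), with Wq(M/M/s) obtained from the Erlang C formula. All arrival rates, service-time parameters, and server counts are illustrative.

```python
# Rough comparison of a pooled system vs. a partition into "short" and "long" groups.
import math

def erlang_c(s, a):
    """Probability of delay in M/M/s with offered load a = lambda * E[S] (requires a < s)."""
    summ = sum(a ** k / math.factorial(k) for k in range(s))
    last = a ** s / math.factorial(s) * s / (s - a)
    return last / (summ + last)

def mgs_wq(lam, mean_service, scv, s):
    """Approximate mean wait in queue for M/G/s; scv is the squared CV of service times."""
    a = lam * mean_service
    if a >= s:
        return math.inf
    wq_mms = erlang_c(s, a) * mean_service / (s - a)
    return (1.0 + scv) / 2.0 * wq_mms

# Pooled system: the service-time SCV of about 0.875 follows from mixing the two
# groups below (2/3 of arrivals with mean 0.5, 1/3 with mean 2.0, each with SCV 0.25).
print("pooled :", mgs_wq(lam=9.0, mean_service=1.0, scv=0.875, s=10))
print("short  :", mgs_wq(lam=6.0, mean_service=0.5, scv=0.25, s=4))
print("long   :", mgs_wq(lam=3.0, mean_service=2.0, scv=0.25, s=7))
```

Note that the partition needs the per-group service-time variances, not just the means, which is exactly the point the abstract makes about characterizing service-time distributions beyond their first moments.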