Meyn and Tweedie is back! The bible on Markov chains in general state spaces has been brought up to date to reflect developments in the field since 1996, many of them sparked by publication of the first edition. The pursuit of more efficient simulation algorithms for complex Markovian models, and of algorithms for computing optimal policies for controlled Markov models, has opened new directions for research on Markov chains. As a result, new applications have emerged across a wide range of topics, including optimisation, statistics, and economics. New commentary and an epilogue by Sean Meyn summarise recent developments, and the references have been fully updated. This second edition reflects the same discipline and style that marked out the original and helped it become a classic: proofs are rigorous and concise, the range of applications is broad and knowledgeable, and key ideas are accessible to practitioners with limited mathematical background.
In many settings in which Monte Carlo methods are applied, there may be no known algorithm for exactly generating the random object for which an expectation is to be computed. Frequently, however, one can generate arbitrarily close approximations to the random object. We introduce a simple randomization idea for creating unbiased estimators in such a setting based on a sequence of approximations. Applying this idea to computing expectations of path functionals associated with stochastic differential equations (SDEs), we construct finite variance unbiased estimators with a "square root convergence rate" for a general class of multidimensional SDEs. We then identify the optimal randomization distribution. Numerical experiments with various path functionals of continuous-time processes that often arise in finance illustrate the effectiveness of our new approach.
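The randomization idea can be sketched on a toy problem. The following is a minimal single-term randomized estimator in the spirit of the abstract, applied not to an SDE but to a hypothetical target E[e^Z] with Z standard normal (true value e^{1/2}), where the level-n approximation Y_n is the n-term Taylor truncation of e^Z; the geometric level distribution and all names are illustrative assumptions, not the paper's construction.

```python
import math
import numpy as np

def single_term_estimator(rng, r=0.5):
    """One draw of an unbiased estimator of E[exp(Z)], Z ~ N(0, 1).

    Toy setup (illustrative only): Y_n is the n-th order Taylor
    truncation of exp evaluated at Z, so Y_n -> exp(Z) and
    Y_n - Y_{n-1} = Z**n / n!.  Drawing a random level N with
    P(N = n) = (1 - r) * r**n and weighting the level-N difference
    by 1 / P(N = n) removes the bias of every finite truncation.
    """
    z = rng.standard_normal()
    n = int(rng.geometric(1 - r)) - 1       # support {0, 1, 2, ...}
    p_n = (1 - r) * r ** n
    delta = z ** n / math.factorial(n)      # Y_n - Y_{n-1}, with Y_{-1} = 0
    return delta / p_n

rng = np.random.default_rng(7)
draws = [single_term_estimator(rng) for _ in range(200_000)]
estimate = float(np.mean(draws))            # close to exp(0.5)
```

The factorial decay of the Taylor differences dominates the geometric weights, so the estimator has finite variance; in the SDE setting of the paper, the analogous requirement is that the approximation error decay fast enough relative to the randomization distribution.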
We consider the problem of optimally allocating a computing budget to maximize the probability of correct selection in the ordinal optimization setting. This problem has been studied in the literature in an approximate mathematical framework under the assumption that the underlying random variables have a Gaussian distribution. We use large deviations theory to develop a mathematically rigorous framework for determining the optimal allocation of computing resources even when the underlying variables have general, non-Gaussian distributions. Further, in a simple setting we show that when there exists an indifference zone, quick stopping rules may be developed that exploit the exponential decay rates of the probability of false selection. In practice, the distributions of the underlying variables are estimated from generated samples, leading to performance degradation due to estimation errors. On a positive note, we show that the corresponding estimates of optimal allocations converge to their true values as the number of samples used for estimation increases to infinity.
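The flavor of the allocation problem is easiest to see in the Gaussian two-alternative special case, where the large-deviations rate has a closed form. A minimal sketch (function names and parameter values are illustrative assumptions): with budget fractions α1 and α2 = 1 − α1 and means μ1 > μ2, the probability of falsely selecting alternative 2 decays at rate (μ1 − μ2)² / (2(σ1²/α1 + σ2²/α2)) per unit of budget, and maximizing this rate gives α1* = σ1/(σ1 + σ2).

```python
import numpy as np

def false_selection_rate(alpha1, mu1, mu2, sigma1, sigma2):
    """Large-deviations decay rate (per unit budget) of the probability
    of falsely selecting alternative 2 when mu1 > mu2, for Gaussian
    samples with standard deviations sigma1, sigma2."""
    alpha2 = 1.0 - alpha1
    return (mu1 - mu2) ** 2 / (
        2.0 * (sigma1 ** 2 / alpha1 + sigma2 ** 2 / alpha2)
    )

mu1, mu2, sigma1, sigma2 = 1.0, 0.0, 2.0, 1.0
grid = np.linspace(0.01, 0.99, 9801)            # step 1e-4
rates = [false_selection_rate(a, mu1, mu2, sigma1, sigma2) for a in grid]
alpha_star = float(grid[int(np.argmax(rates))])  # numeric maximizer
closed_form = sigma1 / (sigma1 + sigma2)         # analytic optimum
```

For general non-Gaussian distributions the rate function has no such closed form, which is precisely where the rigorous large-deviations framework of the abstract is needed.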
Importance sampling is one of the classical variance reduction techniques for increasing the efficiency of Monte Carlo algorithms for estimating integrals. The basic idea is to replace the original random mechanism in the simulation by a new one and, at the same time, modify the function being integrated. In this paper the idea is extended to problems arising in the simulation of stochastic systems. Discrete-time Markov chains, continuous-time Markov chains, and generalized semi-Markov processes are covered. Applications are given to a GI/G/1 queueing problem and response surface estimation. Computation of the theoretical moments arising in importance sampling is discussed and some numerical examples are given.

Keywords: simulation, variance reduction, importance sampling
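As a minimal illustration of the change-of-measure idea (a generic textbook sketch, not the Markov-chain setting of the paper): estimate the tail probability P(Z > 4) for Z ~ N(0, 1) by sampling from the shifted proposal N(4, 1) and multiplying by the likelihood ratio of the two densities.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
c = 4.0
n = 100_000

# Sample from the proposal N(c, 1) instead of the target N(0, 1),
# so the rare region {x > c} is hit about half the time.
x = rng.standard_normal(n) + c

# Likelihood ratio phi(x) / phi(x - c) = exp(-c*x + c**2 / 2).
weights = np.exp(-c * x + c ** 2 / 2.0)

# Weighted indicator: unbiased for P(Z > c) under the target.
estimate = float(np.mean(weights * (x > c)))

true_value = 0.5 * math.erfc(c / math.sqrt(2.0))   # P(Z > 4), about 3.17e-5
```

A naive estimator would need on the order of 1/P(Z > 4) ≈ 30,000 samples just to see one success; the tilted proposal achieves a few-percent relative error with the same budget.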
Let (Xn : n ≥ 0) be a sequence of i.i.d. random variables with negative mean. Set S0 = 0 and define Sn = X1 + · · · + Xn. We propose an importance sampling algorithm to estimate the tail of M = max{Sn : n ≥ 0} that is strongly efficient for both light- and heavy-tailed increment distributions. Moreover, in the case of heavy-tailed increments and under additional technical assumptions, our estimator can be shown to have asymptotically vanishing relative variance, in the sense that its coefficient of variation vanishes as the tail parameter increases. A key feature of our algorithm is that it is state-dependent. In the presence of light tails, our procedure recovers Siegmund's (1979) algorithm. The rigorous analysis of efficiency requires new Lyapunov-type inequalities that can be useful in the study of more general importance sampling algorithms.

Strong efficiency implies that the number of simulation runs required to estimate P(M > b) to a given relative accuracy is bounded in b. A weaker criterion is logarithmic efficiency, which implies that the number of replications required to estimate P(M > b) with a given relative accuracy grows at rate o(|log P(M > b)|); see Asmussen and Glynn (2007), Juneja and Shahabuddin (2006), or Bucklew (2004), Section 5.2, for a discussion of efficiency in rare-event simulation. A strongly efficient estimator R(b) is said to exhibit asymptotically vanishing relative error when E[R(b)²] ∼ P(M > b)² as b ↑ ∞ (or, equivalently, when the coefficient of variation vanishes as b ↑ ∞).

In this paper we develop an implementable state-dependent importance sampling algorithm that can be rigorously proved to possess asymptotically vanishing relative error. By "state-dependent," we mean that the importance sampling algorithm generates the next increment of the random walk from a distribution that depends on the walk's current state (i.e., location).
This is the first strongly efficient algorithm developed for estimating the tail of M in the presence of general heavy-tailed increment distributions. Prior efficient algorithms require the increment distribution to be of M/G/1 type with regularly varying or Weibull-type right tails.

A key idea is that our importance distribution is state-dependent. There is a long history of applications of state-dependent importance sampling to simulation problems. Perhaps the first related contributions are those by Hammersley and Morton (1954) and Rosenbluth and Rosenbluth (1955) in the context of molecular simulation; see also the text by Liu (2001) for applications of sequential importance sampling in various scientific contexts. However, a general framework for the rigorous analysis of these types of algorithms is still under development. In a sequence of recent papers, Paul Dupuis and Hui Wang [see, e.g., Dupuis and Wang (2004)] have proposed a general methodology that can be applied in the presence of large deviations theory for light-tailed systems. Our paper contributes to this general literature by developing Lyapunov-type inequalities suited to such analyses.
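For the light-tailed case recovered by the algorithm, Siegmund's (1979) exponential-tilting scheme can be sketched directly. The toy below uses Gaussian increments N(−0.5, 1), for which the tilting root of the cumulant equation E[e^{θX}] = 1 is θ* = 1; the tilted walk drifts upward, is guaranteed to cross the level b, and each run returns the likelihood ratio e^{−θ*·S_τ} at the crossing time. All names and parameter choices here are illustrative assumptions, not the paper's heavy-tailed construction.

```python
import math
import numpy as np

def siegmund_estimate(b, n_runs=20_000, mu=-0.5, seed=1):
    """Siegmund-style estimator of P(M > b) for a random walk with
    N(mu, 1) increments, mu < 0.  The tilting parameter theta solves
    mu*theta + theta**2 / 2 = 0, i.e. theta = -2*mu; tilted increments
    are N(mu + theta, 1), which have positive drift, so the walk
    crosses b with probability one under the new measure."""
    theta = -2.0 * mu
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_runs):
        s = 0.0
        while s <= b:
            s += rng.standard_normal() + mu + theta
        total += math.exp(-theta * s)     # likelihood ratio at crossing
    return total / n_runs

estimate = siegmund_estimate(b=10.0)      # estimates the rare event P(M > 10)
```

Since S_τ > b at the crossing, every sample lies below e^{−θ*b}, which is the source of the bounded relative error in the light-tailed setting; the paper's contribution is making the increment distribution of each step depend on the current state so that a comparable guarantee holds with heavy tails.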
Consider a single-server queue with a renewal arrival process and generally distributed processing times in which each customer independently reneges if service has not begun within a generally distributed amount of time. We establish that both the workload and queue-length processes in this system can be approximated by a regulated Ornstein-Uhlenbeck (ROU) process when the arrival rate is close to the processing rate and reneging times are large. We further show that a ROU process also approximates the queue-length process, under the same parameter assumptions, in a balking model. Our balking model assumes the queue-length is observable to arriving customers, and that each customer balks if his or her conditional expected waiting time is too large.
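The limiting ROU process itself is straightforward to simulate. Below is a minimal projected Euler sketch with reflection at zero (parameter values and names are illustrative, not taken from the paper):

```python
import numpy as np

def simulate_rou(x0=0.0, theta=1.0, m=0.5, sigma=1.0,
                 dt=1e-3, n_steps=50_000, seed=3):
    """Projected Euler scheme for a regulated (reflected at 0)
    Ornstein-Uhlenbeck process dX = theta*(m - X) dt + sigma dB,
    where the reflection is imposed by truncating each Euler
    step at zero."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    sqdt = np.sqrt(dt)
    for k in range(n_steps):
        drift = theta * (m - x[k]) * dt
        noise = sigma * sqdt * rng.standard_normal()
        x[k + 1] = max(0.0, x[k] + drift + noise)   # reflect at the origin
    return x

path = simulate_rou()
```

The mean-reverting drift captures the reneging (or balking) pressure that grows with the queue length, while the reflection at zero captures the fact that the workload and queue length cannot go negative.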