2005
DOI: 10.1214/105051604000001016

Dynamic importance sampling for uniformly recurrent Markov chains

Abstract: Importance sampling is a variance reduction technique for efficient estimation of rare-event probabilities by Monte Carlo. In standard importance sampling schemes, the system is simulated using an a priori fixed change of measure suggested by a large deviation lower bound analysis. Recent work, however, has suggested that such schemes do not work well in many situations. In this paper we consider dynamic importance sampling in the setting of uniformly recurrent Markov chains. By "dynamic" we mean that in the c…
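The "a priori fixed change of measure" the abstract refers to can be illustrated with a minimal sketch (the example, parameter values, and function name are ours, not taken from the paper): estimating p = P(S_n / n >= a) for i.i.d. standard normals by exponentially tilting the sampling mean to a, the tilt suggested by the large-deviation lower bound.

```python
import math
import random


def is_estimate(n=25, a=0.5, num_samples=10000, seed=0):
    """Static importance sampling for p = P(S_n / n >= a), X_i i.i.d. N(0, 1).

    Samples are drawn from the tilted law Q = N(a, 1) -- the fixed change of
    measure suggested by the large-deviation (Cramer) lower bound -- and each
    hit is weighted by the likelihood ratio dP/dQ = exp(-a*S_n + n*a^2/2),
    which keeps the estimator unbiased.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        s = sum(rng.gauss(a, 1.0) for _ in range(n))  # simulate under Q
        if s / n >= a:
            total += math.exp(-a * s + 0.5 * n * a * a)  # unbiased weight
    return total / num_samples


# For n=25, a=0.5 the exact value is P(N(0,1) >= a*sqrt(n)) ~ 6.2e-3,
# so the tilted estimator concentrates tightly around that value.
```

The paper's point of departure is that such a *static* tilt can fail badly for some events; the dynamic schemes it studies let the change of measure depend on the current state of the simulation.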

Cited by 46 publications (60 citation statements)
References 36 publications
“…0 (Sadowsky and Bucklew, 1990; Dupuis and Wang, 2005) The examples described in Sections 3-5 involve word families that can be characterized as V_m. We may also include an additional subscript m to a previously defined quantity to highlight its dependence on m, for example p_m, q_m, b_m and n_m.…”
Section: The Relative Error (RE) of a Monte Carlo Estimator
confidence: 99%
“…To construct our optimal IS algorithms we use an optimality result from [16], which was obtained using the optimal control/subsolution approach to IS of [12,3,4,6,5]. This result states that to construct optimal IS algorithms for the simulation of a wide range of buffer overflow events of any stable Jackson network, it is sufficient to build appropriate smooth subsolutions to a Hamilton-Jacobi-Bellman (HJB) equation and its boundary conditions (these are given in (7) in the context we study in the current paper).…”
Section: Introduction
confidence: 99%
“…It was established in [4,5] that importance sampling algorithms for estimating rare-event probabilities, or functionals that are largely determined by rare events, are closely related to deterministic differential games. More precisely, the asymptotic optimal performance of importance sampling schemes can be characterized by the value function of a two-person zero-sum differential game, which can in turn be characterized by the solution to the Isaacs equation (a nonlinear PDE) associated with the game.…”
Section: Introduction
confidence: 99%
“…More precisely, the asymptotic optimal performance of importance sampling schemes can be characterized by the value function of a two-person zero-sum differential game, which can in turn be characterized by the solution to the Isaacs equation (a nonlinear PDE) associated with the game. It was also discussed in [4,5] that one can construct asymptotically optimal importance sampling algorithms based on this solution.…”
Section: Introduction
confidence: 99%
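For orientation, the two standard notions the citing statements rely on can be written out explicitly (notation ours, not taken from the cited papers): the importance sampling estimator is unbiased under any change of measure Q, and "asymptotically optimal" means its second moment decays at twice the large-deviation rate of the probability itself, the best rate Jensen's inequality allows.

```latex
% Unbiasedness of the importance sampling estimator for p_n = P(A_n):
\hat{p}_n \;=\; \mathbf{1}_{A_n}\,\frac{dP}{dQ},
\qquad
\mathbb{E}^{Q}\!\left[\hat{p}_n\right] \;=\; p_n .
% Asymptotic optimality: the second moment decays at twice the
% large-deviation rate \gamma of p_n itself,
\lim_{n\to\infty} -\tfrac{1}{n}\log \mathbb{E}^{Q}\!\left[\hat{p}_n^{\,2}\right]
\;=\; 2\gamma,
\qquad
\gamma \;=\; \lim_{n\to\infty} -\tfrac{1}{n}\log p_n .
```

The game-theoretic results quoted above characterize which (possibly state-dependent) choices of Q achieve this rate.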