“…For further applications to Markov and semi-Markov processes, see e.g. Iscoe, Ney and Nummelin (1985), Ney and Nummelin (1987a), and Meyn and Tweedie (1993).…”
Section: Notation, Hypotheses and Estimation Results
“…For the proofs, see Ney and Nummelin (1987a), Sections 3 and 4, and Iscoe, Ney and Nummelin (1985), Lemma 3.1.…”
Section: Nonnegative Kernels, Eigenvalues and Eigenvectors
“…Proof (i) Following Iscoe, Ney and Nummelin (1985), Lemma 3.4, introduce the generating function … [Nummelin (1984), Proposition 4.7 (i)]. Note that, by the construction of K M Q, the individual terms and the number of nonzero terms in the summand on the right of (4.21) are finite; consequently,…”
Section: Proof of Theorem 3.1: Lower Bound
Let {(X_n, S_n) : n = 0, 1, …} be a Markov additive process, where {X_n} is a Markov chain on a general state space and S_n is an additive component on R^d. We consider P{S_n ∈ A/ε, some n} as ε → 0, where A ⊂ R^d is open and the mean drift of {S_n} is directed away from A. Our main objective is to study the simulation of P{S_n ∈ A/ε, some n} using the Monte Carlo technique of importance sampling. If the set A is convex, we establish: (i) the precise dependence (as ε → 0) of the estimator variance on the choice of the simulation distribution; (ii) the existence of a unique simulation distribution which is efficient and optimal in the asymptotic sense of Siegmund (1976). We then extend our techniques to the case where A is not convex. Our results lead to positive conclusions which complement the multidimensional counterexamples of Glasserman and Wang (1997).
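The Siegmund-type change of measure referred to in the abstract can be sketched in one dimension. The following is an illustrative sketch only, not the paper's multidimensional Markov additive construction: for a Gaussian random walk with negative drift, the conjugate (Lundberg) root θ* of the cumulant generating function determines the exponentially tilted simulation law, and the likelihood ratio at first passage gives an unbiased estimator. The function name and all parameter values are hypothetical.

```python
import math
import random

def siegmund_estimate(mu=-0.5, sigma=1.0, b=5.0, n_paths=2000, seed=1):
    """Estimate P{S_n >= b for some n} for a Gaussian random walk with
    negative drift mu, via Siegmund's exponential change of measure.

    theta* solves log E[exp(theta X)] = theta*mu + theta^2*sigma^2/2 = 0,
    i.e. theta* = -2*mu/sigma^2; under the tilted law the drift flips to
    -mu > 0, so the level b is reached with probability one.
    """
    rng = random.Random(seed)
    theta = -2.0 * mu / sigma ** 2         # conjugate (Lundberg) root
    total = 0.0
    for _ in range(n_paths):
        s = 0.0
        while s < b:                       # tilted walk drifts upward
            s += rng.gauss(-mu, sigma)     # tilted increment N(-mu, sigma^2)
        total += math.exp(-theta * s)      # likelihood ratio at first passage
    return total / n_paths
```

Since S_τ ≥ b at the first passage time τ, every weight exp(−θ*S_τ) is at most exp(−θ*b), which is the boundedness that underlies the asymptotic efficiency discussed in the abstract.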
“…The paper [21] also shows that the rate function of the large deviation principle for {S_n/n} is the convex conjugate of H, i.e.,…”
Section: LDP for a Uniformly Recurrent Markov Chain
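The conjugacy between H and the rate function can be checked numerically. The sketch below substitutes the cumulant generating function of an i.i.d. Gaussian increment for the chain's limiting cumulant H, an assumption made purely so the answer has a closed form; the function names are illustrative.

```python
import math

def H(alpha, mu=0.0, sigma=1.0):
    """Log moment generating function of a N(mu, sigma^2) increment,
    standing in for the limiting cumulant H of the Markov chain."""
    return alpha * mu + 0.5 * (sigma * alpha) ** 2

def rate_function(x, mu=0.0, sigma=1.0):
    """Convex conjugate I(x) = sup_alpha [alpha*x - H(alpha)],
    approximated by a grid search over alpha in [-5, 5]."""
    alphas = (a / 1000.0 for a in range(-5000, 5001))
    return max(a * x - H(a, mu, sigma) for a in alphas)
```

In this Gaussian case the conjugate is I(x) = (x − μ)²/(2σ²), so the grid search returns 0.5 at x = 1 for μ = 0, σ = 1, matching the large deviation rate of {S_n/n}.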
“…Fix any α ∈ R^d. Then by [21], the non-negative kernel exp{⟨α, g(y)⟩} p(x, dy) admits a unique real eigenvalue exp{H(α)} and a unique (up to a multiplicative constant) eigenfunction r(x; α) in the sense that, for every x ∈ S,…”
Section: LDP for a Uniformly Recurrent Markov Chain
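For a finite state space the eigenvalue relation quoted above can be verified directly. The sketch below uses a hypothetical two-state chain (the matrix p and functional g are made up for illustration) and power iteration to recover the Perron eigenvalue exp{H(α)} and eigenfunction r(·; α) of the tilted kernel.

```python
import math

# Hypothetical two-state chain: transition matrix p, additive functional g.
p = [[0.7, 0.3],
     [0.4, 0.6]]
g = [1.0, -1.0]

def tilted_eigen(alpha, iters=200):
    """Perron eigenvalue and eigenvector of the tilted kernel
    K_alpha(x, y) = exp(alpha * g(y)) * p[x][y], by power iteration.
    H(alpha) is the log of the returned eigenvalue."""
    K = [[math.exp(alpha * g[y]) * p[x][y] for y in (0, 1)] for x in (0, 1)]
    r = [1.0, 1.0]
    lam = 1.0
    for _ in range(iters):
        r = [K[x][0] * r[0] + K[x][1] * r[1] for x in (0, 1)]
        lam = max(r)                # sup-norm normalization
        r = [v / lam for v in r]    # normalized eigenfunction estimate
    return lam, r
```

At α = 0 the kernel is the stochastic matrix p itself, so the eigenvalue is 1 and H(0) = 0, as the LDP requires; for α > 0 the eigenvalue exceeds 1 here because the stationary mean of g is positive for this chain.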
Importance sampling is a variance reduction technique for efficient estimation of rare-event probabilities by Monte Carlo. In standard importance sampling schemes, the system is simulated using an a priori fixed change of measure suggested by a large deviation lower bound analysis. Recent work, however, has suggested that such schemes do not work well in many situations. In this paper, we consider adaptive importance sampling in the setting of uniformly recurrent Markov chains. By "adaptive," we mean that the change of measure depends on the history of the samples. Based on a control-theoretic approach to large deviations, the existence of asymptotically optimal adaptive schemes is demonstrated in great generality. In this framework, the difference between a static change of measure and an adaptive change of measure amounts to the difference between an open-loop control and a feedback control. The implementation of the adaptive schemes is carried out with the help of a limiting Bellman equation. Also presented are numerical examples contrasting the adaptive and standard schemes.
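The open-loop versus feedback distinction can be illustrated with a toy state-dependent tilt. The "aim at the target" rule below is a hypothetical stand-in, not the Bellman-optimal control constructed in the paper: at each step the simulation mean depends on the current position (feedback), and the accumulated likelihood ratio keeps the estimator unbiased. All names and parameter values are illustrative.

```python
import math
import random

def adaptive_is(mu=-0.5, b=3.0, horizon=20, n_paths=4000, seed=2):
    """Estimate P{S_horizon >= b} for a Gaussian walk with drift mu < 0,
    using a state-feedback change of measure.

    Feedback rule (illustrative only): in state s at step t, draw the
    next increment from N(m_t, 1) with m_t = (b - s) / (horizon - t),
    i.e. aim straight at the target level b.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s, w = 0.0, 1.0
        for t in range(horizon):
            m = (b - s) / (horizon - t)    # feedback tilt from current state
            x = rng.gauss(m, 1.0)
            # likelihood ratio: N(mu,1) density over N(m,1) density at x
            w *= math.exp(((x - m) ** 2 - (x - mu) ** 2) / 2.0)
            s += x
        total += w if s >= b else 0.0      # unbiased: E[w * 1{S >= b}] = P
    return total / n_paths
```

A static scheme would fix one tilted mean for the whole path (open-loop); here the tilt is recomputed from the sample history at every step, which is the sense of "adaptive" in the abstract.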
Large deviations theory provides insight into the probabilities of rare events by developing quantifiable estimates for them. An example is the probability for a large overshoot of the total claim amount in a portfolio. Another example is the estimation of the ruin probability starting from a very large initial capital.