2016
DOI: 10.1007/978-3-319-33507-0_10

Central Limit Theorem for Adaptive Multilevel Splitting Estimators in an Idealized Setting

Abstract: The Adaptive Multilevel Splitting (AMS) algorithm is a powerful and versatile iterative method for estimating the probabilities of rare events. We prove a new central limit theorem for the associated AMS estimators introduced in [5], which have recently been revisited in [3]; the main result there is the (non-asymptotic) unbiasedness of the estimators. To prove asymptotic normality, we rely on and extend the technique presented in [3]: the (asymptotic) analysis of an integral equation. Numerical simulations illustrate…
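The algorithm itself is not reproduced in this snippet, so the following is only a minimal sketch of the idealized AMS estimator in the simplest toy case, X ~ Exp(1), for which exact sampling from the conditional law L(X | X > z) is available by memorylessness (z + Exp(1)). The function name ams_estimate and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ams_estimate(n, k, a, rng):
    """Minimal sketch of the idealized AMS estimator of p = P(X > a), X ~ Exp(1).

    At each iteration the k particles with the smallest values are killed and
    resampled from the conditional law L(X | X > Z), where Z is the k-th
    smallest current value; for Exp(1) this conditional law is Z + Exp(1).
    """
    x = rng.exponential(size=n)      # initial i.i.d. sample of X
    n_iter = 0
    while True:
        order = np.argsort(x)
        z = x[order[k - 1]]          # current level: k-th smallest value
        if z >= a:                   # stop once the level has reached a
            break
        x[order[:k]] = z + rng.exponential(size=k)   # resample killed particles above z
        n_iter += 1
    # survival factor per iteration times the final fraction of particles above a
    return (1.0 - k / n) ** n_iter * np.mean(x > a)

rng = np.random.default_rng(0)
a, n, k = 6.0, 1000, 10
p_hat = ams_estimate(n, k, a, rng)
print(f"AMS estimate: {p_hat:.3e}   exact p = exp(-a) = {np.exp(-a):.3e}")
```

In a single run the estimate fluctuates around exp(-6) ≈ 2.5e-3; the unbiasedness mentioned in the abstract refers to the expectation of this estimator being exactly equal to p.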

Cited by 9 publications (26 citation statements); references 15 publications.
“…To analyze how efficient the algorithm is depending on n and k, one can look at the variance; more precisely, in the regime when a and k are fixed and n → +∞, a Central Limit Theorem holds, see [16]: Theorem 3.2. We have the following convergence in law, when n → +∞, and for fixed k and a (such that p = P(X > a) > 0):…”
Section: Theoretical Results
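A rough numerical illustration of the regime quoted above (k and a fixed, n → +∞), again in the Exp(1) toy model of the sketch after the abstract. The precise limiting law of Theorem 3.2 is truncated in the quotation, so the script below does not compare against a theoretical asymptotic variance; it only reports the empirical mean, standard deviation and skewness of √n (p̂ − p) for two sample sizes, which should show centering near zero and a stabilizing, roughly Gaussian spread.

```python
import numpy as np

def ams_estimate(n, k, a, rng):
    # same idealized AMS estimator as in the sketch after the abstract (X ~ Exp(1))
    x = rng.exponential(size=n)
    n_iter = 0
    while True:
        order = np.argsort(x)
        z = x[order[k - 1]]
        if z >= a:
            break
        x[order[:k]] = z + rng.exponential(size=k)
        n_iter += 1
    return (1.0 - k / n) ** n_iter * np.mean(x > a)

rng = np.random.default_rng(1)
a, k, runs = 4.0, 5, 1000
p = np.exp(-a)                                    # exact probability for Exp(1)
for n in (100, 400):
    err = np.array([np.sqrt(n) * (ams_estimate(n, k, a, rng) - p)
                    for _ in range(runs)])
    skew = ((err - err.mean()) ** 3).mean() / err.std() ** 3
    print(f"n = {n:4d}: mean of sqrt(n)*(p_hat - p) = {err.mean():+.2e}, "
          f"std = {err.std():.2e}, skewness = {skew:+.2f}")
```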
“…The fundamental argument in [15] and [16] is the embedding of the estimation problem of p into the problem of estimating the conditional probability p(x) = P(X > a | X > x) for any x ∈ [0, a]; for that purpose, we introduce the estimator p̂_{n,k}(x) obtained thanks to Algorithm 1, where in the initialization step the random variables are generated according to L(X | X > x). Then Theorem 3.1 is a consequence of two facts:…”
Section: Theoretical Results
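To illustrate the embedding used in the quoted argument, the sketch below runs the same toy estimator as in the earlier snippets, but with the initialization step drawing the particles from L(X | X > x), so that the output estimates p(x) = P(X > a | X > x); for the Exp(1) toy model this conditional law is x + Exp(1) and the exact value is exp(-(a - x)). The helper name ams_conditional is hypothetical.

```python
import numpy as np

def ams_conditional(x0, a, n, k, rng):
    """Sketch of the idealized AMS estimate of p(x0) = P(X > a | X > x0), X ~ Exp(1).

    Identical to the estimator sketched after the abstract, except that the
    initialization step draws the particles from L(X | X > x0), which for
    Exp(1) is x0 + Exp(1) by memorylessness.
    """
    x = x0 + rng.exponential(size=n)   # initial sample from L(X | X > x0)
    n_iter = 0
    while True:
        order = np.argsort(x)
        z = x[order[k - 1]]
        if z >= a:
            break
        x[order[:k]] = z + rng.exponential(size=k)
        n_iter += 1
    return (1.0 - k / n) ** n_iter * np.mean(x > a)

rng = np.random.default_rng(2)
a, n, k = 5.0, 1000, 10
for x0 in (0.0, 2.0, 4.0):
    p_hat = ams_conditional(x0, a, n, k, rng)
    print(f"x0 = {x0:.1f}: estimate {p_hat:.3e}   exact exp(-(a - x0)) = {np.exp(-(a - x0)):.3e}")
```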