2020
DOI: 10.1615/int.j.uncertaintyquantification.2020032659
Variance Reduction Methods and Multilevel Monte Carlo Strategy for Estimating Densities of Solutions to Random Second-Order Linear Differential Equations

Abstract: This paper concerns the estimation of the density function of the solution to a random non-autonomous second-order linear differential equation with analytic data processes. In a recent contribution, we proposed to express the density function as an expectation, and we used a standard Monte Carlo algorithm to approximate the expectation. Although the algorithms worked satisfactorily for most test-problems, some numerical challenges emerged for others, due to large statistical errors. In these situations, the c…
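The abstract's core idea, expressing the target density as an expectation and approximating that expectation by standard Monte Carlo, can be illustrated with a minimal sketch. The model below is a generic conditionally Gaussian stand-in with made-up coefficient distributions, not the random second-order equation studied in the paper; the symbols mu, sigma, N and the helper density_mc are assumptions introduced only for illustration. If X = mu + sigma*N with N ~ N(0,1) independent of the random pair (mu, sigma), then f_X(x) = E[phi((x - mu)/sigma)/sigma], and a crude Monte Carlo estimator simply averages this integrand over samples of (mu, sigma).

```python
# Minimal sketch of the "density as an expectation" idea, approximated by
# standard Monte Carlo. Hypothetical conditionally Gaussian model, NOT the
# random second-order ODE of the paper: X = mu + sigma * N, N ~ N(0, 1), so
#   f_X(x) = E[ phi((x - mu) / sigma) / sigma ].
import numpy as np

rng = np.random.default_rng(0)

def density_mc(x, n_samples=100_000):
    """Crude Monte Carlo estimate of f_X(x) and its standard error."""
    # Hypothetical distributions of the random coefficients (illustration only).
    mu = rng.uniform(-1.0, 1.0, size=n_samples)
    sigma = rng.uniform(0.5, 1.5, size=n_samples)
    # Conditional density of X given (mu, sigma), averaged over the samples.
    terms = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return terms.mean(), terms.std(ddof=1) / np.sqrt(n_samples)

estimate, std_error = density_mc(0.3)
print(f"f_X(0.3) ~= {estimate:.4f} (std. error {std_error:.4f})")
```

The standard error reported by this sketch is the statistical error the abstract refers to: when it is large for some evaluation points, variance reduction and multilevel Monte Carlo strategies, the subject of the paper, become attractive.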

Cited by 2 publications (1 citation statement)
References 28 publications (79 reference statements)
“…Finally, notice that, for a fixed M, the kernel density estimation and the parametric estimation have practically the same cost, since both rely on M realizations of Z_1 and Z_2 (the kernel density estimation also needs realizations of A, evaluates the kernel function and considers a bandwidth, while the parametric estimation evaluates f_A). Despite all the favorable properties of the parametric estimation method, there are some situations, which were not analyzed in [12], in which it may present slow convergence: since (2) involves random denominators, the variance of the random quantity inside the expectation may be high for some z, which produces "noise" that plagues the PDF estimate [17,18]. This issue is not observed for kernel density estimation.…”
Section: Alternative Formulation
Mentioning confidence: 99%
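The "random denominators" issue raised in the quoted statement can be made concrete with a toy sketch. The model below is hypothetical, not equation (2) of the citing work; the symbols sigma, N, z and the Silverman bandwidth rule are assumptions chosen only for illustration. When the random scale sigma in the denominator of the integrand can come close to zero, the sample variance of the expectation-based (parametric) estimator is large at some evaluation points z, whereas a Gaussian kernel density estimate built from the same M realizations divides by a deterministic bandwidth and remains comparatively well behaved.

```python
# Hedged toy illustration of the "random denominator" issue described above.
# Hypothetical model, for illustration only: X = sigma * N with N ~ N(0, 1)
# and sigma ~ Uniform(1e-3, 1), so that
#   f_X(z) = E[ phi(z / sigma) / sigma ],
# an expectation whose integrand contains the random denominator sigma.
import numpy as np

rng = np.random.default_rng(1)
M = 100_000
z = 0.0  # evaluation point where the denominator issue is most visible

sigma = rng.uniform(1e-3, 1.0, size=M)     # random scale, can be near zero
samples = sigma * rng.standard_normal(M)   # M realizations of X

def gauss(u):
    """Standard normal density."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

# Parametric ("density as expectation") estimate: average of phi(z/sigma)/sigma.
param_terms = gauss(z / sigma) / sigma
param_est = param_terms.mean()
param_se = param_terms.std(ddof=1) / np.sqrt(M)

# Gaussian kernel density estimate on the same M realizations
# (Silverman's rule-of-thumb bandwidth; the denominator h is deterministic).
h = 1.06 * samples.std(ddof=1) * M ** (-0.2)
kde_terms = gauss((z - samples) / h) / h
kde_est = kde_terms.mean()
kde_se = kde_terms.std(ddof=1) / np.sqrt(M)

print(f"parametric estimate at z={z}: {param_est:.3f} (std. error {param_se:.3f})")
print(f"kernel estimate at z={z}:     {kde_est:.3f} (std. error {kde_se:.3f})")
```

Running this sketch at z = 0, where small values of sigma dominate the integrand, typically yields a noticeably larger standard error for the parametric estimate than for the kernel estimate at the same sample size, mirroring the "noise" described in the statement; at evaluation points farther from zero the two can be comparable.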