“…The difference between the two algorithms is that the stochastic time steps may use different meshes for each realization, based on successive Brownian bridge sampling of a Brownian motion realization, while the deterministic time steps use the same mesh for all realizations of the Brownian motion $W = (W^1, \dots, W^{\ell_0})$. The construction and the analysis of the adaptive algorithms are inspired by the related work of Moon et al [32], on adaptive algorithms for deterministic ordinary differential equations, and the error estimates from Szepessy et al [39].…”
Section: Convergence Rates For Adaptive Approximation (mentioning; confidence: 99%)
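The excerpt above turns on successive Brownian bridge sampling: when a stochastic time-stepping algorithm decides to halve a step after the Brownian increments for the current mesh have already been drawn, the new midpoint value must be sampled conditionally on the stored endpoint values, so that refining the mesh leaves the law of the path unchanged. A minimal sketch of that one operation, assuming a scalar standard Brownian motion (the function name and the example mesh are illustrative, not taken from the paper):

```python
import numpy as np

def brownian_bridge_midpoint(t0, t1, w0, w1, rng):
    """Sample W((t0 + t1)/2) conditional on W(t0) = w0 and W(t1) = w1.

    For standard Brownian motion the conditional law is Gaussian with
    mean (w0 + w1)/2 and variance (t1 - t0)/4, so inserting the midpoint
    this way does not change the distribution of the path.
    """
    mean = 0.5 * (w0 + w1)
    std = np.sqrt(0.25 * (t1 - t0))
    return mean + std * rng.standard_normal()

rng = np.random.default_rng(0)

# Coarse realization of W on [0, T]: increments drawn up front.
T, N = 1.0, 4
t = np.linspace(0.0, T, N + 1)
w = np.concatenate([[0.0],
                    np.cumsum(np.sqrt(np.diff(t)) * rng.standard_normal(N))])

# Halve the third step after the fact, without redrawing the path.
n = 2
tm = 0.5 * (t[n] + t[n + 1])
wm = brownian_bridge_midpoint(t[n], t[n + 1], w[n], w[n + 1], rng)
t = np.insert(t, n + 1, tm)
w = np.insert(w, n + 1, wm)
```

Because the refinement decision uses the already-sampled future value $W(t_1)$, the resulting steps are not adapted to the filtration generated by $W$ alone, which is exactly the subtlety the next excerpt addresses.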
“…Building on Morin et al [33], the works of Binev et al [7] and Stevenson [38] extend the analysis of Cohen et al [10] to include finite element approximation. The work of Moon et al [32] connects DeVore's smoothness conditions to error densities for adaptive a.s. convergence of the approximate solution $\overline{X}$ as the maximal step size tends to zero. Although, for the stochastic time-stepping algorithm, the time steps are not adapted to the standard filtration generated by $W$ alone, the work of Szepessy et al [39] proved that the corresponding approximate solution converges to the correct adapted solution X.…”
Section: Convergence Rates For Adaptive Approximation (mentioning; confidence: 99%)
“…Although the work of Szepessy et al [39] proposed adaptive algorithms, the main focus in that work was on error estimates. Properties regarding the stopping, efficiency, and accuracy of the adaptive algorithms, following the ideas in Moon et al [32], are first studied here. Assume that the process X satisfies (1) and that its approximation $\overline{X}$ is given by (2); then the error expansions in Theorems 1.2 and 2.2 of Szepessy et al [39] have the form…”
Section: A Posteriori Error Expansion (mentioning; confidence: 99%)
“…To achieve (31) for each realization, start with an initial partition $\Delta t[1]$ and then specify iteratively a new partition $\Delta t[k+1]$ from $\Delta t[k]$, using the following refinement strategy. For each realization in the $m$-th batch:

for each time step $n = 1, 2, \ldots, N[k]$:
    if $r_n[k] > s_1 \,\mathrm{TOL}_T\, (N[k]\,M)^{-1}$ then
        divide $\Delta t_n[k]$ into $H = 2$ uniform substeps
    else
        let the new step be the same as the old
    endif
endfor    (32)

The refinement strategy (32) motivates the following stopping criterion: for each realization of the $m$-th batch,…”
Section: Convergence Rates For Stochastic Time Steps (mentioning; confidence: 99%)
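Read as pseudocode, strategy (32) is a single sweep over the current mesh that halves every step whose error indicator exceeds a fixed per-step share of the time-discretization tolerance. A sketch of one such sweep for a single realization, assuming (as is standard for this kind of algorithm) that large indicators trigger refinement; the threshold $s_1\,\mathrm{TOL}_T (N[k]M)^{-1}$ follows the quoted text, while the constants s1 and S1 are assumed tuning parameters, not the paper's values:

```python
import numpy as np

def refine(dt, r, tol_t, M, s1=0.25, H=2):
    """One sweep of a (32)-style refinement for a single realization.

    dt : current step sizes, dt[n] = t[n+1] - t[n]
    r  : error indicator per step, roughly |error density| * dt[n]**2
    Steps whose indicator exceeds s1 * tol_t / (N * M) are divided
    into H uniform substeps; all other steps are kept unchanged.
    """
    N = len(dt)
    threshold = s1 * tol_t / (N * M)
    new_dt = []
    for step, indicator in zip(dt, r):
        if indicator > threshold:
            new_dt.extend([step / H] * H)  # refine: H uniform substeps
        else:
            new_dt.append(step)            # keep the old step
    return np.array(new_dt)

def stop(r, tol_t, M, S1=1.0):
    """(33)-style stopping: accept the mesh once the largest indicator
    falls below a slightly larger per-step threshold (S1 > s1)."""
    return np.max(r) <= S1 * tol_t / (len(r) * M)
```

Separating a smaller refinement constant s1 from a larger stopping constant S1 is what lets such algorithms both terminate after finitely many sweeps and keep the final error near the tolerance, which is the subject of the next excerpt.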
“…Suppose the adaptive algorithm uses the refinement strategy (32) and the stopping criterion (33). Assume that c satisfies (36) for the time steps corresponding to the maximal error indicator on each refinement level, and that…”
Section: Stopping Of the Adaptive Algorithm (mentioning)
…to a problem-independent factor defined in the algorithm. Numerical examples illustrate the behavior of the adaptive algorithms, motivating when stochastic and deterministic adaptive time steps are more efficient than constant time steps and when adaptive stochastic steps are more efficient than adaptive deterministic steps.
Adaptive time-stepping methods based on the Monte Carlo Euler method for weak approximation of Itô stochastic differential equations are developed. The main result is new expansions of the computational error, with a computable leading-order term in a posteriori form, based on stochastic flows and discrete dual backward problems. The expansions lead to efficient and accurate computation of error estimates. Adaptive algorithms for either stochastic time steps or deterministic time steps are described. Numerical examples illustrate when stochastic and deterministic adaptive time steps are superior to constant time steps and when adaptive stochastic steps are superior to adaptive deterministic steps. Stochastic time steps use Brownian bridges and require more work for a given number of time steps. Deterministic time steps may yield more time steps but require less work; for example, in the limit of vanishing error tolerance, the ratio of the computational error to its computable estimate tends to one with negligible additional work to determine the adaptive deterministic time steps.
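For orientation, the baseline that these adaptive algorithms improve on is the plain Monte Carlo Euler (Euler-Maruyama) scheme for the weak approximation of E[g(X(T))] with constant steps. A self-contained sketch under that reading (function names, parameters, and the test problem are illustrative):

```python
import numpy as np

def monte_carlo_euler(a, b, x0, T, N, M, g, rng):
    """Estimate E[g(X(T))] for dX = a(t, X) dt + b(t, X) dW by the
    Euler-Maruyama scheme with N constant steps and M sample paths."""
    dt = T / N
    total = 0.0
    for _ in range(M):
        t, x = 0.0, x0
        for _ in range(N):
            dw = np.sqrt(dt) * rng.standard_normal()  # Brownian increment
            x += a(t, x) * dt + b(t, x) * dw
            t += dt
        total += g(x)
    return total / M

# Example: geometric Brownian motion, where E[X(1)] = x0 * exp(mu).
rng = np.random.default_rng(1)
mu, sigma = 0.05, 0.2
est = monte_carlo_euler(lambda t, x: mu * x, lambda t, x: sigma * x,
                        x0=1.0, T=1.0, N=32, M=20_000, g=lambda x: x,
                        rng=rng)
```

The adaptive algorithms described in the abstract replace the constant dt by per-step sizes driven by the computable a posteriori error indicators, either per realization (stochastic steps, via Brownian bridges) or shared across all realizations (deterministic steps).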
A variational principle, inspired by optimal control, yields a simple derivation of an error representation, global error = local error · weight, for general approximation of functions of solutions to ordinary differential equations. This error representation is then approximated by a sum of computable error indicators to obtain a useful global error indicator for adaptive mesh refinements. A uniqueness formulation is provided for desirable error representations of adaptive algorithms.
Mathematics Subject Classification (2000): 65L70, 65G50
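The representation "global error = local error · weight" can be made concrete for forward Euler on a scalar ODE: the weights solve a discrete dual problem backward in time, and each step contributes its leading-order local truncation error times the dual weight. A sketch under those assumptions, with the observable g(x) = x (all notation here is illustrative, not the paper's):

```python
import numpy as np

def error_indicators(f, dfdx, x0, T, N):
    """Computable indicators r[n] with sum(r) ~ x(T) - x_N for forward
    Euler on x' = f(x) and the observable g(x) = x.

    psi[n] is the discrete dual weight: the sensitivity of the final
    value to a perturbation introduced at step n.  The leading-order
    local error of Euler is (dt**2 / 2) * x'' = (dt**2 / 2) * f'(x) f(x).
    """
    dt = T / N
    x = np.empty(N + 1)
    x[0] = x0
    for n in range(N):                         # forward Euler
        x[n + 1] = x[n] + dt * f(x[n])

    psi = np.empty(N + 1)                      # discrete dual, backward
    psi[N] = 1.0                               # g(x) = x, so g'(x_N) = 1
    for n in range(N - 1, -1, -1):
        psi[n] = (1.0 + dt * dfdx(x[n])) * psi[n + 1]

    local = 0.5 * dt**2 * dfdx(x[:-1]) * f(x[:-1])
    return local * psi[1:]                     # global error ~ sum of these

# Example: x' = -x^2, x0 = 1, so x(t) = 1 / (1 + t) and x(1) = 0.5.
r = error_indicators(lambda x: -x**2, lambda x: -2.0 * x,
                     x0=1.0, T=1.0, N=100)
# r.sum() approximates the global error x(1) - x_N to leading order,
# and |r[n]| serve as per-step indicators for adaptive refinement.
```

This is the ODE analogue of the expansions quoted earlier: the SDE versions replace the dual ODE by discrete dual backward problems along the stochastic flow, but the structure "sum over steps of local error times weight" is the same.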