Tau-leaping is a popular discretization method for generating approximate paths of continuous-time, discrete-space Markov chains, notably for biochemical reaction systems. To compute expected values in this context, an appropriate multilevel Monte Carlo form of tau-leaping has been shown to improve efficiency dramatically. In this work we derive new analytic results concerning the computational complexity of multilevel Monte Carlo tau-leaping that are significantly sharper than previous ones. We avoid taking asymptotic limits, and focus on a practical setting where the system size is large enough for many events to take place along a path, so that exact simulation of paths is expensive, making tau-leaping an attractive option. We use a general scaling of the system components that allows the reaction rate constants and the abundances of species to vary over several orders of magnitude, and we exploit the random time change representation developed by Kurtz. The key feature of the analysis that allows for the sharper bounds is that, when comparing relevant pairs of processes, we analyze the variance of their difference directly rather than bounding it via the second moment. Use of the second moment is natural in the setting of a diffusion equation, where the multilevel approach was first developed and where strong convergence results for numerical methods are readily available, but it is not optimal for the Poisson-driven jump systems that we consider here. We also present computational results that illustrate the new analysis.
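To fix ideas, the following is a minimal sketch of plain (fixed-step) tau-leaping for a continuous-time, discrete-space Markov chain: over each step of length tau, the propensities are frozen and the number of firings of each reaction channel is drawn from a Poisson distribution. The birth-death model, its rate constants, and all function names here are hypothetical illustrations, not the scaled systems or the coupled multilevel scheme analyzed in the paper.

```python
import numpy as np

def tau_leap(x0, stoich, propensities, tau, T, rng):
    """Fixed-step tau-leaping approximation of one sample path up to time T.

    x0           : initial state vector (species counts)
    stoich       : (num_reactions, num_species) stoichiometry matrix
    propensities : function mapping a state to an array of reaction rates
    tau          : fixed step size (the final step may overshoot T slightly)
    """
    x = np.array(x0, dtype=float)
    t = 0.0
    while t < T:
        a = propensities(x)
        # Number of firings of each channel over [t, t + tau],
        # with propensities frozen at the start of the step.
        k = rng.poisson(a * tau)
        x = np.maximum(x + k @ stoich, 0.0)  # crude guard against negative counts
        t += tau
    return x

# Hypothetical birth-death model: 0 -> S at rate 10, S -> 0 at rate 0.1 * x.
rng = np.random.default_rng(0)
stoich = np.array([[1.0], [-1.0]])
props = lambda x: np.array([10.0, 0.1 * x[0]])
paths = np.array([tau_leap([0.0], stoich, props, tau=0.05, T=50.0, rng=rng)
                  for _ in range(200)])
print(paths.mean())  # Monte Carlo estimate of the mean copy number at time T
```

The multilevel estimator discussed above builds on pairs of such paths generated with coarse and fine values of tau and coupled so that their difference has small variance; the sketch shows only a single level.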
Introduction

Many modeling scenarios give rise to continuous-time, discrete-space Markov chains. Notable application areas include chemistry, systems biology, epidemiology, population dynamics, queuing theory, and several branches of physics [9,17,26,27,28]. It is straightforward to simulate sample paths for this class of processes, but in many realistic contexts the computational cost of performing Monte Carlo is prohibitive. This work focuses on the commonly arising task of computing an expected value of some feature of the solution, for example the mean level of a chemical species at some specified time or the correlation
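For reference, exact sample paths of such a chain can be generated with the classical Gillespie (stochastic simulation) algorithm, and an expected value such as the mean copy number at a fixed time can then be estimated by averaging over independent paths. The model and rate constants below are a hypothetical illustration; the cost concern raised above is that each path may require very many such event-by-event steps.

```python
import numpy as np

def gillespie(x0, stoich, propensities, T, rng):
    """Exact (Gillespie) simulation of one sample path up to time T."""
    x = np.array(x0, dtype=float)
    t = 0.0
    while True:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0.0:          # no reaction can fire: the state is absorbed
            break
        t += rng.exponential(1.0 / a0)    # exponential waiting time to next event
        if t > T:
            break
        j = rng.choice(len(a), p=a / a0)  # which reaction channel fires
        x += stoich[j]
    return x

# Hypothetical birth-death model: 0 -> S at rate 10, S -> 0 at rate 0.1 * x.
rng = np.random.default_rng(1)
stoich = np.array([[1.0], [-1.0]])
props = lambda x: np.array([10.0, 0.1 * x[0]])
samples = [gillespie([0.0], stoich, props, T=50.0, rng=rng)[0]
           for _ in range(200)]
print(np.mean(samples))  # Monte Carlo estimate of the mean copy number at time T
```

Every reaction event costs one loop iteration here, which is exactly why approximate discretizations such as tau-leaping become attractive when many events occur along each path.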