Under multiplicative drift and other regularity conditions, it is established that the asymptotic variance associated with a particle filter approximation of the prediction filter is bounded uniformly in time, and the nonasymptotic, relative variance associated with a particle approximation of the normalizing constant is bounded linearly in time. The conditions are demonstrated to hold for some hidden Markov models on noncompact state spaces. The particle stability results are obtained by proving $v$-norm multiplicative stability and exponential moment results for the underlying Feynman-Kac formulas. Published in the Annals of Applied Probability (http://dx.doi.org/10.1214/12-AAP909) by the Institute of Mathematical Statistics (http://www.imstat.org/aap/).
We introduce a general form of sequential Monte Carlo algorithm defined in terms of a parameterized resampling mechanism. We find that a suitably generalized notion of the Effective Sample Size (ESS), widely used to monitor algorithm degeneracy, appears naturally in a study of its convergence properties. We are then able to phrase sufficient conditions for time-uniform convergence in terms of algorithmic control of the ESS, in turn achievable by adaptively modulating the interaction between particles. This leads us to suggest novel algorithms which are, in senses to be made precise, provably stable and yet designed to avoid the degree of interaction which hinders parallelization of standard algorithms. As a byproduct, we prove time-uniform convergence of the popular adaptive resampling particle filter.
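For concreteness, the standard ESS used as a degeneracy monitor (a special case of the generalized quantity discussed above) can be sketched as follows. The adaptive-resampling helper and its threshold are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def ess(log_weights):
    """Effective Sample Size from unnormalized log-weights.

    ESS = 1 / sum(w_i^2) for normalized weights w_i; it ranges from 1
    (complete degeneracy) to N (all weights equal).
    """
    w = np.exp(log_weights - np.max(log_weights))  # stabilize before exponentiating
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def maybe_resample(rng, particles, log_weights, threshold=0.5):
    """Illustrative adaptive rule: resample (multinomially) only when the
    ESS drops below a fraction of the particle count."""
    n = len(particles)
    if ess(log_weights) < threshold * n:
        w = np.exp(log_weights - np.max(log_weights))
        idx = rng.choice(n, size=n, p=w / w.sum())
        return particles[idx], np.zeros(n)  # equal weights after resampling
    return particles, log_weights
```

Resampling only when the ESS is low is exactly the kind of "modulated interaction" the convergence conditions above are phrased in terms of: between resampling events the particles evolve independently.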
This paper addresses the problem of estimating the Potts parameter β jointly with the unknown parameters of a Bayesian model within a Markov chain Monte Carlo (MCMC) algorithm. Standard MCMC methods cannot be applied to this problem because performing inference on β requires computing the intractable normalizing constant of the Potts model. In the proposed MCMC method the estimation of β is conducted using a likelihood-free Metropolis-Hastings algorithm. Experimental results obtained for synthetic data show that estimating β jointly with the other unknown parameters leads to estimation results that are as good as those obtained with the actual value of β. On the other hand, assuming that the value of β is known can degrade estimation performance significantly if this value is incorrect. To illustrate the interest of this method, the proposed algorithm is successfully applied to real bidimensional SAR and tridimensional ultrasound images.

Index Terms: Potts-Markov field, mixture model, Bayesian estimation, Gibbs sampler, intractable normalizing constants.

arXiv:1207.5355v1 [stat.CO] 23 Jul 2012

…resulting in the so-called pseudo-likelihood estimators [20]. Although analytically convenient, this approach generally does not lead to a satisfactory posterior density and results in poor estimation [21]. Also, as noticed in [18], such a prior distribution generally depends on the data, since the normalizing constant C(β) depends implicitly on the number of observations (priors that depend on the data are not recommended in the Bayesian paradigm [22, p. 36]).

B. Approximation of C(β)

Another possibility is to approximate the normalizing constant C(β). Existing approximations can be classified into three categories: those based on analytical developments, those based on sampling strategies, and those combining both. A survey of the state-of-the-art approximation methods up to 2004 is presented in [18].
The methods considered in [18] are the mean field, the tree-structured mean field and the Bethe energy (loopy Metropolis) approximations, as well as two sampling strategies based on Langevin MCMC algorithms. More recently, exact recursive expressions have been proposed to compute C(β) analytically [9]. However, to our knowledge, these recursive methods have only been successfully applied to small problems (i.e., MRFs of size smaller than 40 × 40) with reduced spatial correlation (β < 0.5).

Another sampling-based approximation consists of estimating C(β) by Monte Carlo integration [23, Chap. 3], at the expense of very substantial computation and possibly biased estimates (the bias arises from the estimation error of C(β)). Better results can be obtained by using importance or path sampling methods [24]. These methods have been applied to the estimation of β within an MCMC image processing algorithm in [17]. Although more precise than Monte Carlo integration, approximating C(β) by importance or path sampling still requires substantial computation and is generally infeasible for large fields. This has motivated recent works that reduce computati...
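The path-sampling idea can be made concrete on a toy Potts model small enough to check against exact enumeration. It uses the standard identity d/dβ log C(β) = E_β[S(X)], where S counts equal-valued neighbour pairs, and approximates the integral over β with MCMC estimates of E_β[S] on a grid. Everything below (the 2×2 grid, the Gibbs sampler, grid sizes) is an illustrative assumption, not the construction of [17] or [24]:

```python
import itertools
import math
import numpy as np

# Toy q-state Potts model on a 2x2 grid: pi_beta(x) ∝ exp(beta * S(x)),
# with S(x) = number of equal-valued neighbour pairs.
SITES = [(0, 0), (0, 1), (1, 0), (1, 1)]
EDGES = [((0, 0), (0, 1)), ((1, 0), (1, 1)), ((0, 0), (1, 0)), ((0, 1), (1, 1))]
NBRS = {s: [b for a, b in EDGES if a == s] + [a for a, b in EDGES if b == s]
        for s in SITES}
Q = 2

def stat(x):
    return sum(x[a] == x[b] for a, b in EDGES)

def gibbs_mean_stat(rng, beta, n_iter=2000, burn=200):
    """Estimate E_beta[S] with a single-site Gibbs sampler."""
    x = {s: int(rng.integers(Q)) for s in SITES}
    total, count = 0.0, 0
    for it in range(n_iter):
        for s in SITES:
            ws = [math.exp(beta * sum(x[n] == k for n in NBRS[s])) for k in range(Q)]
            tot, u, acc = sum(ws), rng.random(), 0.0
            for k, wk in enumerate(ws):
                acc += wk / tot
                if u < acc:
                    x[s] = k
                    break
        if it >= burn:
            total += stat(x)
            count += 1
    return total / count

def log_C_path(rng, beta, n_grid=11):
    """Path-sampling (thermodynamic integration) estimate of log C(beta):
    log C(beta) = n*log(q) + integral_0^beta E_u[S] du (trapezoidal rule)."""
    grid = np.linspace(0.0, beta, n_grid)
    means = np.array([gibbs_mean_stat(rng, u) for u in grid])
    integral = float(np.sum((means[1:] + means[:-1]) * np.diff(grid)) / 2.0)
    return len(SITES) * math.log(Q) + integral

def log_C_exact(beta):
    """Brute-force enumeration, feasible only because the grid is tiny."""
    total = sum(math.exp(beta * stat(dict(zip(SITES, cfg))))
                for cfg in itertools.product(range(Q), repeat=len(SITES)))
    return math.log(total)
```

The exact enumeration is there only to validate the estimator; on realistically sized fields it is unavailable, which is precisely why the approximations surveyed above exist.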
We investigate sampling laws for particle algorithms and the influence of these laws on the efficiency of particle approximations of marginal likelihoods in hidden Markov models. Among a broad class of candidates we characterize the essentially unique family of particle system transition kernels which is optimal with respect to an asymptotic-in-time variance growth rate criterion. The sampling structure of the algorithm defined by these optimal transitions turns out to be only subtly different from standard algorithms, and yet the fluctuation properties of the estimates it provides can be dramatically different. The structure of the optimal transition suggests a new class of algorithms, which we term "twisted" particle filters and which we validate with asymptotic analysis of a more traditional nature, in the regime where the number of particles tends to infinity. Published in the Annals of Statistics (http://dx.doi.org/10.1214/13-AOS1167) by the Institute of Mathematical Statistics (http://www.imstat.org/aos/).
We study convergence and convergence rates for resampling schemes. Our first main result is a general consistency theorem based on the notion of negative association, which is applied to establish the almost sure weak convergence of measures output from Kitagawa's (1996) stratified resampling method. Carpenter et al.'s (1999) systematic resampling method is similar in structure but can fail to converge depending on the order of the input samples. We introduce a new resampling algorithm based on a stochastic rounding technique of Srinivasan (2001), which shares some attractive properties of systematic resampling, but which exhibits negative association and therefore converges irrespective of the order of the input samples. We confirm a conjecture made by Kitagawa (1996) that ordering input samples by their states in R yields a faster rate of convergence; we establish that when particles are ordered using the Hilbert curve in R^d, the variance of the resampling error is O(N^(-(1+1/d))) under mild conditions, where N is the number of particles. We use these results to establish asymptotic properties of particle algorithms based on resampling schemes that differ from multinomial resampling.
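Minimal sketches of three of the resampling schemes under discussion, in their standard textbook formulations (not the authors' code; the stochastic-rounding scheme of Srinivasan is omitted):

```python
import numpy as np

def multinomial_resample(rng, w, n):
    """n i.i.d. draws from the weighted empirical distribution."""
    return rng.choice(len(w), size=n, p=w)

def stratified_resample(rng, w, n):
    """One independent uniform per stratum [i/n, (i+1)/n); the resulting
    draws are negatively associated, which underpins the consistency
    result discussed above."""
    u = (np.arange(n) + rng.random(n)) / n
    return np.searchsorted(np.cumsum(w), u)

def systematic_resample(rng, w, n):
    """A single uniform shifted across all strata; cheapest and lowest
    variance per draw, but its behaviour can depend on the ordering of
    the input samples."""
    u = (np.arange(n) + rng.random()) / n
    return np.searchsorted(np.cumsum(w), u)
```

All three return index arrays into the particle set. With equal weights, stratified and systematic resampling reproduce every particle exactly once, whereas multinomial resampling does not; this variance reduction is what the ordering results above quantify.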
This paper concerns numerical assessment of Monte Carlo error in particle filters. We show that by keeping track of certain key features of the genealogical structure arising from resampling operations, it is possible to estimate variances of a number of standard Monte Carlo approximations which particle filters deliver. All our estimators can be computed from a single run of a particle filter with no further simulation. We establish that as the number of particles grows, our estimators are weakly consistent for asymptotic variances of the Monte Carlo approximations and some of them are also non-asymptotically unbiased. The asymptotic variances can be decomposed into terms corresponding to each time step of the algorithm, and we show how to consistently estimate each of these terms. When the number of particles may vary over time, this allows approximation of the asymptotically optimal allocation of particle numbers.
Optimal Bayesian multi-target filtering is, in general, computationally impractical owing to the high dimensionality of the multi-target state. The Probability Hypothesis Density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed Sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the Bootstrap Particle Filter, and the latter is well known to be inefficient. Drawing on ideas from the Auxiliary Particle Filter (APF), we present a SMC implementation of the PHD filter which employs auxiliary variables to enhance its efficiency. Numerical examples are presented for two scenarios, including a challenging nonlinear observation model.
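The APF mechanism the paper draws on can be illustrated in the ordinary single-target state-space setting; the linear-Gaussian model and all parameter values below are assumptions for illustration, not the PHD construction:

```python
import numpy as np

# Toy linear-Gaussian model:  x_t = a*x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
A, Q_VAR, R_VAR = 0.9, 0.5, 1.0

def norm_logpdf(y, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (y - mean) ** 2 / var)

def apf_step(rng, x, logw, y):
    """One Auxiliary Particle Filter step: pre-select particles via a
    predictive likelihood evaluated at the prior mean (the auxiliary
    variable), propagate, then correct with a second-stage weight."""
    n = len(x)
    mu = A * x                                   # point prediction per particle
    first = logw + norm_logpdf(y, mu, R_VAR)     # first-stage log-weights
    p = np.exp(first - first.max())
    p /= p.sum()
    idx = rng.choice(n, size=n, p=p)             # auxiliary indices
    x_new = A * x[idx] + rng.normal(0.0, np.sqrt(Q_VAR), n)
    # second-stage correction: true likelihood over the pre-selection proxy
    logw_new = norm_logpdf(y, x_new, R_VAR) - norm_logpdf(y, mu[idx], R_VAR)
    return x_new, logw_new - np.max(logw_new)
```

The gain over the bootstrap filter is that particles are pre-selected by how well they are predicted to explain the incoming observation, so fewer samples are wasted; the paper's contribution is to carry this auxiliary-variable idea over to the PHD recursion.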
This paper addresses finite sample stability properties of sequential Monte Carlo methods for approximating sequences of probability distributions. The results presented herein are applicable in the scenario where the start and end distributions in the sequence are fixed and the number of intermediate steps is a parameter of the algorithm. Under assumptions which hold on non-compact spaces, it is shown that the effect of the initial distribution decays exponentially fast in the number of intermediate steps and the corresponding stochastic error is stable in the L_p norm.