A new class of Markov chain Monte Carlo (MCMC) algorithms, based on simulating piecewise deterministic Markov processes (PDMPs), has recently shown great promise: these samplers are non-reversible, can mix better than standard MCMC algorithms, and can use subsampling ideas to speed up computation in big-data settings. However, current PDMP samplers can only sample from posterior densities that are differentiable almost everywhere, which precludes their use for model choice. Motivated by variable selection problems, we show how to develop reversible jump PDMP samplers that jointly explore the discrete space of models and the continuous space of parameters. Our framework is general: it takes any existing PDMP sampler and adds two types of trans-dimensional moves that allow a variable to be added to or removed from the model. We show how the rates of these trans-dimensional moves can be calculated so that the sampler has the correct invariant distribution. Simulations show that the new samplers can mix better than standard MCMC algorithms. Our empirical results also show they are more efficient than gradient-based samplers that avoid model choice by using continuous spike-and-slab priors, which replace the point mass at zero for each parameter with a density concentrated around zero.
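As a toy illustration of the PDMP machinery these samplers build on (not the reversible jump construction itself), the following sketch simulates a one-dimensional Zig-Zag process targeting a standard normal. The target, the event rate max(0, v·U′(x)) with U(x) = x²/2, and the closed-form inversion of the integrated rate are all simplifying assumptions made for this example.

```python
import numpy as np

def zigzag_gaussian(T=20000.0, x0=0.0, v0=1.0, seed=0):
    """1-D Zig-Zag process targeting N(0, 1).

    The velocity is +/-1 and flips at events of an inhomogeneous
    Poisson process with rate lambda(x, v) = max(0, v * x); for this
    Gaussian target the event time can be drawn by exact inversion."""
    rng = np.random.default_rng(seed)
    x, v, t = x0, v0, 0.0
    times, xs, vs = [0.0], [x], [v]
    while t < T:
        a = v * x
        e = rng.exponential()
        # invert Lambda(tau) = int_0^tau max(0, a + s) ds = e
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        x += v * tau          # deterministic linear motion between events
        t += tau
        v = -v                # flip the velocity at the event
        times.append(t); xs.append(x); vs.append(v)
    return np.array(times), np.array(xs), np.array(vs)

def discretise(times, xs, vs, dt=0.1):
    """Evaluate the piecewise-linear trajectory on a uniform time grid."""
    grid = np.arange(0.0, times[-1], dt)
    idx = np.searchsorted(times, grid, side="right") - 1
    return xs[idx] + vs[idx] * (grid - times[idx])

times, xs, vs = zigzag_gaussian()
samples = discretise(times, xs, vs)
print(samples.mean(), samples.var())  # close to 0 and 1 for N(0, 1)
```

Time averages along the continuous trajectory, rather than the event points alone, are what converge to expectations under the target distribution.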
Stochastic computational models in the form of pure jump processes occur frequently in descriptions of chemical reaction processes, of ion channel dynamics, and of the spread of infections in populations. For spatially extended models, the computational complexity can be rather high, so that approximate multiscale models are attractive alternatives. Within this framework some variables are described stochastically, while others are approximated by a macroscopic point value. We devise theoretical tools for analyzing the pathwise multiscale convergence of this type of variable splitting method, aimed specifically at spatially extended models. Notably, the conditions we develop guarantee well-posedness of the approximations without requiring explicit assumptions of a priori bounded solutions. We are also able to quantify the effect of the different sources of error, namely the multiscale error and the splitting error, by developing suitable error bounds. Computational experiments on selected problems serve to illustrate our findings.
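A minimal sketch of the kind of variable splitting described above, assuming a toy two-species gene expression model (this model, its rates, and the Lie splitting step are illustrative assumptions, not taken from the paper): a low-copy species is simulated stochastically with Gillespie's algorithm, while a high-copy species is advanced by its macroscopic equation, the two parts alternating over a splitting step.

```python
import numpy as np

def hybrid_step(m, p, dt, k1, d1, k2, d2, rng):
    """One Lie splitting step: SSA for the discrete variable m,
    then a macroscopic update for the continuous variable p."""
    # (i) stochastic part: Gillespie SSA for m on [0, dt]
    t = 0.0
    while True:
        rates = np.array([k1, d1 * m])        # production, degradation
        total = rates.sum()
        tau = rng.exponential(1.0 / total)
        if t + tau > dt:
            break
        t += tau
        if rng.random() < rates[0] / total:
            m += 1
        else:
            m -= 1
    # (ii) deterministic part: exact solve of p' = k2*m - d2*p with m frozen
    p = (p - k2 * m / d2) * np.exp(-d2 * dt) + k2 * m / d2
    return m, p

rng = np.random.default_rng(1)
k1, d1, k2, d2 = 10.0, 1.0, 5.0, 1.0
m, p = 0, 0.0
dt, n_steps = 0.05, 40000
ms, ps = [], []
for _ in range(n_steps):
    m, p = hybrid_step(m, p, dt, k1, d1, k2, d2, rng)
    ms.append(m); ps.append(p)
m_avg = np.mean(ms[n_steps // 2:])
p_avg = np.mean(ps[n_steps // 2:])
print(m_avg, p_avg)  # stationary means near k1/d1 = 10 and k2*k1/(d1*d2) = 50
```

Here the splitting error comes from freezing each variable while the other is updated, and the multiscale error from replacing the high-copy species by a point value; the error bounds discussed above quantify both effects.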
This paper deals with the computation of the histogram of tensor images, that is, images in which each pixel carries an n × n symmetric positive definite matrix, an element of SPD(n). An approach based on orthogonal series density estimation is introduced, which is particularly useful for measures based on Riemannian metrics. By considering SPD(n) as the space of covariance matrices of multivariate Gaussian distributions, we obtain the corresponding density estimates for the measures induced by both the Fisher metric and the Wasserstein metric. Experimental results on the application of such histogram estimation to diffusion tensor image (DTI) segmentation, texture segmentation and texture recognition are included.
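The orthogonal series idea can be illustrated in one dimension (the paper works on SPD(n); the cosine basis on [0, 1] used here is an assumption chosen purely for illustration): the density is expanded in an orthonormal basis, and each coefficient is estimated by the empirical mean of the corresponding basis function over the samples.

```python
import numpy as np

def os_density_estimate(samples, K=10):
    """Orthogonal series density estimator on [0, 1] with the cosine
    basis phi_0 = 1, phi_k(x) = sqrt(2) cos(k*pi*x).  Each coefficient
    c_k = E[phi_k(X)] is estimated by a sample average; the estimator
    is the truncated series 1 + sum_k c_k phi_k(x)."""
    coeffs = np.array([np.mean(np.sqrt(2) * np.cos(k * np.pi * samples))
                       for k in range(1, K + 1)])

    def fhat(x):
        x = np.asarray(x, dtype=float)
        basis = np.sqrt(2) * np.cos(np.outer(np.arange(1, K + 1), np.pi * x))
        return 1.0 + coeffs @ basis
    return fhat

rng = np.random.default_rng(0)
samples = rng.beta(2, 2, size=20000)   # true density is 6x(1-x)
fhat = os_density_estimate(samples, K=10)
grid = np.linspace(0, 1, 1001)
print(fhat(0.5).item())                # true density value at 0.5 is 1.5
print(fhat(grid).mean())               # Riemann check: integrates to ~1
```

On SPD(n) the same recipe applies with a basis orthonormal with respect to the chosen Riemannian measure, which is where the Fisher and Wasserstein geometries enter.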