We investigate the frequentist posterior contraction rate of nonparametric
Bayesian procedures in linear inverse problems in both the mildly and severely
ill-posed cases. A theorem is proved in a general Hilbert space setting under
approximation-theoretic assumptions on the prior. The result is applied to
non-conjugate priors, notably sieve and wavelet series priors, as well as in
the conjugate setting. In the mildly ill-posed setting minimax optimal rates
are obtained, with sieve priors being rate adaptive over Sobolev classes. In
the severely ill-posed setting, oversmoothing the prior yields minimax rates.
Previously established results in the conjugate setting are obtained using this
method. Examples of applications include deconvolution, recovering the initial
condition in the heat equation, and the Radon transform.
Comment: 31 pages, minor correction to the proof of Proposition 3.
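The conjugate setting mentioned above admits a short numerical sketch. The snippet below assumes the standard sequence-space formulation of a mildly ill-posed linear inverse problem (singular values decaying like i^{-p}, a Gaussian prior with variances i^{-1-2α}); all concrete parameter choices (p, alpha, beta, the truncation level) are hypothetical illustrations, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000     # noise level is n^{-1/2}
N = 2_000      # truncation of the sequence model
p = 1.0        # mildly ill-posed: polynomially decaying singular values
alpha = 1.0    # prior smoothness (hypothetical choice)
beta = 1.5     # smoothness of the "true" parameter (hypothetical)

i = np.arange(1, N + 1)
kappa = i ** -p               # operator singular values
lam = i ** (-1 - 2 * alpha)   # Gaussian prior variances
theta0 = i ** (-0.5 - beta)   # a Sobolev-type truth

# sequence-model observations: Y_i = kappa_i * theta_i + n^{-1/2} Z_i
Y = kappa * theta0 + rng.standard_normal(N) / np.sqrt(n)

# conjugate Gaussian posterior, computed coordinate-wise
post_var = lam / (n * lam * kappa ** 2 + 1)
post_mean = n * kappa * Y * post_var

err = np.sqrt(np.sum((post_mean - theta0) ** 2))
zero_err = np.sqrt(np.sum(theta0 ** 2))   # error of the trivial zero estimate
print(f"L2 error of posterior mean: {err:.4f} (zero estimate: {zero_err:.4f})")
```

Repeating this over a grid of noise levels n makes the polynomial decay of the error, i.e. the contraction rate, visible directly.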
We investigate Bernstein-von Mises theorems for adaptive nonparametric Bayesian procedures in the canonical Gaussian white noise model. We consider both a Hilbert space and multiscale setting with applications in L^2 and L^∞, respectively. This provides a theoretical justification for plug-in procedures, for example the use of certain credible sets for sufficiently smooth linear functionals. We use this general approach to construct optimal frequentist confidence sets based on the posterior distribution. We also provide simulations to numerically illustrate our approach and obtain a visual representation of the geometries involved.
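The plug-in idea for smooth linear functionals can be sketched concretely. Assuming a conjugate Gaussian sequence-space version of the white noise model (all numerical choices below are hypothetical, not the paper's), the posterior of a linear functional L(θ) = Σ a_i θ_i is again Gaussian, so a credible interval is available in closed form; a Bernstein-von Mises theorem is what justifies reading it as an approximate frequentist confidence interval.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5_000                 # white-noise level n^{-1/2}
N = 1_000
i = np.arange(1, N + 1)

lam = i ** -2.0           # Gaussian prior variances (hypothetical)
theta0 = i ** -1.5        # true signal coefficients (hypothetical)
a = i ** -2.0             # coefficients of a smooth linear functional

# observations Y_i = theta_i + n^{-1/2} Z_i
Y = theta0 + rng.standard_normal(N) / np.sqrt(n)

# conjugate posterior, coordinate-wise
post_var = lam / (n * lam + 1)
post_mean = n * Y * post_var

# the posterior of L(theta) = sum_i a_i theta_i is Gaussian
L_mean = np.sum(a * post_mean)
L_sd = np.sqrt(np.sum(a ** 2 * post_var))

lo, hi = L_mean - 1.96 * L_sd, L_mean + 1.96 * L_sd
L_true = np.sum(a * theta0)
print(f"95% credible interval [{lo:.4f}, {hi:.4f}], truth {L_true:.4f}")
```

The functional must be smooth (a_i decaying fast enough) for the interval width to shrink at the parametric rate; for rougher functionals the plug-in justification breaks down.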
We study a mean-field spike and slab variational Bayes (VB) approximation to Bayesian model selection priors in sparse high-dimensional linear regression. Under compatibility conditions on the design matrix, oracle inequalities are derived for the mean-field VB approximation, implying that it converges to the sparse truth at the optimal rate and gives optimal prediction of the response vector. The empirical performance of our algorithm is studied, showing that it performs comparably to other state-of-the-art Bayesian variable selection methods. We also numerically demonstrate that the widely used coordinate-ascent variational inference algorithm can be highly sensitive to the parameter updating order, leading to potentially poor performance. To mitigate this, we propose a novel prioritized updating scheme that uses a data-driven updating order and performs better in simulations. The variational algorithm is implemented in the R package sparsevb. Supplementary materials for this article are available online.
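The role of the updating order can be illustrated with a toy caricature of mean-field spike-and-slab coordinate-ascent VB. The coordinate updates below are the standard ones for a Bernoulli-Gaussian (spike at zero, Gaussian slab) mean-field approximation; the ordering rule used here, largest marginal correlation |X_j^T y| first, is only a hypothetical stand-in for a data-driven prioritized order, not the scheme or the implementation of the sparsevb package.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

def ss_cavi(X, y, order, sigma2=1.0, tau2=1.0, w=0.1, iters=50):
    """Toy mean-field spike-and-slab CAVI for linear regression.

    q(beta_j) = gamma_j * N(mu_j, s2_j) + (1 - gamma_j) * delta_0,
    updated coordinate-wise in the given order."""
    n, p = X.shape
    xtx = np.sum(X ** 2, axis=0)
    gamma = np.full(p, w)                   # inclusion probabilities
    mu = np.zeros(p)                        # slab means
    s2 = sigma2 / (xtx + sigma2 / tau2)     # slab variances (order-free)
    Xb = X @ (gamma * mu)                   # current fitted values
    for _ in range(iters):
        for j in order:
            Xb -= X[:, j] * (gamma[j] * mu[j])   # drop coordinate j
            mu[j] = s2[j] / sigma2 * (X[:, j] @ (y - Xb))
            logit = (np.log(w / (1 - w))
                     + 0.5 * np.log(s2[j] / tau2)
                     + mu[j] ** 2 / (2 * s2[j]))
            gamma[j] = sigmoid(logit)
            Xb += X[:, j] * (gamma[j] * mu[j])   # restore coordinate j
    return gamma, mu

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                      # sparse truth
y = X @ beta + rng.standard_normal(n)

# data-driven order: strongest marginal correlations updated first
order = np.argsort(-np.abs(X.T @ y))
gamma, mu = ss_cavi(X, y, order)
print("selected:", np.where(gamma > 0.5)[0])
```

Running the same routine with `order = range(p)` versus a correlation-driven order makes the sensitivity visible on harder (correlated-design) instances, which is the phenomenon the abstract describes.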