We derive the precise asymptotic distributional behavior of Gaussian variational approximate estimators of the parameters in a single-predictor Poisson mixed model. These results are the deepest yet obtained concerning the statistical properties of a variational approximation method. Moreover, they give rise to asymptotically valid statistical inference. A simulation study demonstrates that Gaussian variational approximate confidence intervals possess good to excellent coverage properties and have precision similar to that of their exact likelihood counterparts.

An important practical ramification of our theory is asymptotically valid statistical inference for the model parameters. In particular, a form of studentization leads to theoretically justifiable confidence intervals for all model parameters. Unlike those based on the exact likelihood, all Gaussian variational approximate point estimates and confidence intervals can be computed without the need for numerical integration. Simulation results reveal that the confidence intervals have good to excellent coverage and are about the same length as exact likelihood-based intervals.

Variational approximation methodology is now a major research area within computer science; see, for example, Chapter 10 of [3]. It is beginning to have a presence in statistics as well (e.g., [10,14]). A summary of the topic from a statistical perspective is given in [13]. Late 2008 saw the first beta release of a software library, Infer.NET [12], for facilitating variational approximate inference. A high proportion of variational approximation methodology is framed within Bayesian hierarchical structures and offers itself as a faster alternative to Markov chain Monte Carlo methods. The chief driving force is applications where speed is at a premium and some accuracy can be sacrificed.
Examples of such applications are cluster analysis of gene-expression data [17], fitting spatial models to neuroimage data [6], image segmentation [4] and genome-wide association analysis [8]. Other recent developments in approximate Bayesian inference include approximate Bayesian computation (e.g., [2]), expectation propagation (e.g., [11]), integrated nested Laplace approximation (e.g., [16]) and sequential Monte Carlo (e.g., [5]).

As explained in [3] and [13], there are many types of variational approximations. The most popular is variational Bayes (also known as mean field approximation), which relies on product restrictions applied to the joint posterior densities of a Bayesian model. The present article is concerned with Gaussian variational approximation in frequentist models containing random effects. There are numerous models of this general type. One of their hallmarks is the difficulty of exact likelihood-based inference for the model parameters due to the presence of nonanalytic integrals. Generalized linear mixed models (e.g., Chapter 7 of [9]) form a large class of models for handling within-group correlation when the response variable is non-Gaussian. The simple Poisson mixed model lies within this...
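To make the idea concrete, the Gaussian variational approximation for a Poisson mixed model can be sketched numerically. The following is a minimal illustration, not the paper's implementation: a random-intercept Poisson model with a single predictor, where the conditional distribution of each random effect is approximated by a Gaussian q(u_i) = N(mu_i, lambda_i), giving a closed-form lower bound on the log-likelihood that is maximized directly. All parameter values and the simulated data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
m, n = 100, 10                         # groups, observations per group
beta0, beta1, sigma = 0.5, 0.3, 0.5    # true parameter values (hypothetical)
x = rng.uniform(-1, 1, size=(m, n))
u = rng.normal(0.0, sigma, size=m)     # random intercepts
y = rng.poisson(np.exp(beta0 + beta1 * x + u[:, None]))

def unpack(theta):
    b0, b1 = theta[0], theta[1]
    s2 = np.exp(theta[2])              # sigma^2 > 0 via log-parametrization
    mu = theta[3:3 + m]                # variational means mu_i
    lam = np.exp(theta[3 + m:])        # variational variances lambda_i > 0
    return b0, b1, s2, mu, lam

def neg_bound_and_grad(theta):
    """Negative Gaussian variational lower bound and its gradient.

    Uses E_q[exp(eta + u)] = exp(eta + mu + lam/2) for u ~ N(mu, lam),
    so every term of the bound is available in closed form.
    """
    b0, b1, s2, mu, lam = unpack(theta)
    eta = b0 + b1 * x + mu[:, None]
    w = np.exp(eta + lam[:, None] / 2)          # E_q[Poisson mean]
    bound = (np.sum(y * eta - w - gammaln(y + 1))
             + np.sum(0.5 * np.log(lam / s2) - (mu**2 + lam) / (2 * s2) + 0.5))
    r = y - w
    g = np.empty_like(theta)
    g[0] = np.sum(r)                            # d/d beta0
    g[1] = np.sum(x * r)                        # d/d beta1
    g[2] = np.sum(-0.5 + (mu**2 + lam) / (2 * s2))        # d/d log(sigma^2)
    g[3:3 + m] = r.sum(axis=1) - mu / s2                  # d/d mu_i
    g[3 + m:] = (-0.5 * w.sum(axis=1) + 0.5 / lam - 0.5 / s2) * lam  # d/d log(lam_i)
    return -bound, -g

theta0 = np.zeros(3 + 2 * m)
fit = minimize(neg_bound_and_grad, theta0, jac=True, method="L-BFGS-B")
b0_hat, b1_hat, s2_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
```

The variational parameters (mu_i, lambda_i) are optimized jointly with the model parameters, which is what makes the procedure free of numerical integration; point estimates are read off the maximizer of the bound.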
A fast mean field variational Bayes (MFVB) approach to nonparametric regression when the predictors are subject to classical measurement error is investigated. It is shown that applying this technology to the measurement error setting achieves reasonable accuracy. In tandem with the methodological development, a customized Markov chain Monte Carlo method is developed to facilitate evaluation of the accuracy of the MFVB method.
Dedication: Peter Hall was a mentor to the first author and a role model for both authors, and this paper is dedicated to his memory.

Abstract: The functional linear model extends the notion of linear regression to the case where the response and covariates are i.i.d. elements of an infinite-dimensional Hilbert space. The unknown to be estimated is a Hilbert-Schmidt operator, whose inverse is by definition unbounded, rendering the problem of inference ill-posed. In this paper, we consider the more general context where the sample of response/covariate pairs forms a weakly dependent stationary process in the respective product Hilbert space: simply stated, the case where we have a regression between functional time series. We consider a general framework of potentially nonlinear processes, exploiting recent advances in the spectral analysis of functional time series. This allows us to quantify the inherent ill-posedness and motivate a Tikhonov regularisation technique in the frequency domain. Our main result is the rate of convergence for the corresponding estimators of the regression coefficients, the latter forming a summable sequence in the space of Hilbert-Schmidt operators. In a sense, our main result can be seen as a generalisation of the classical functional linear model rates to the case of time series, and rests only upon Brillinger-type mixing conditions. It is seen that, just as the covariance operator eigenstructure plays a central role in the independent case, so does the spectral density operator's eigenstructure in the dependent case. While the analysis becomes considerably more involved in the dependent case, the rates are strikingly comparable to those of the i.i.d. case, but at the expense of an additional factor caused by the necessity to estimate the spectral density operator at a nonparametric rate, as opposed to the parametric rate for covariance operator estimation.

MSC 2010 subject classifications: Primary 62M15, 62J07; secondary 45B05.
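The ill-posedness and the role of the Tikhonov step can be illustrated in finite dimensions. A minimal NumPy sketch, assuming a diagonal covariance operator with polynomially decaying eigenvalues and a smooth true coefficient sequence (all numerical values hypothetical): inverting the covariance directly amplifies the sampling noise in the cross-covariance through the small eigenvalues, while the ridge term rho*I stabilises the inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50                                    # truncation level (hypothetical)
lam = 1.0 / np.arange(1, d + 1) ** 2      # decaying eigenvalues => unbounded inverse
C = np.diag(lam)                          # covariance operator in its eigenbasis
b_true = 1.0 / np.arange(1, d + 1) ** 3   # smooth true coefficient sequence
c_xy = C @ b_true                         # population cross-covariance C b
c_hat = c_xy + 1e-3 * rng.normal(size=d)  # cross-covariance with sampling noise

rho = 1e-2                                # Tikhonov regularisation parameter
b_naive = np.linalg.solve(C, c_hat)                  # unregularised inverse
b_tik = np.linalg.solve(C + rho * np.eye(d), c_hat)  # Tikhonov-regularised inverse

err_naive = np.linalg.norm(b_naive - b_true)
err_tik = np.linalg.norm(b_tik - b_true)
```

In the paper's setting the same trade-off appears frequency by frequency, with the spectral density operator at each frequency playing the role of C; the choice of rho balances the noise amplification of the naive inverse against the bias introduced by regularisation.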