2009
DOI: 10.1162/neco.2008.08-07-592

The Variational Gaussian Approximation Revisited

Abstract: The variational approximation of posterior distributions by multivariate Gaussians has been much less popular in the Machine Learning community compared to the corresponding approximation by factorising distributions. This is for a good reason: the Gaussian approximation is in general plagued by an O(N²) number of variational parameters to be optimised, N being the number of random variables. In this work, we discuss the relationship between the Laplace and the variational approximation, and we show that for …
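
For orientation, a minimal sketch of the objective behind that O(N²) count, in standard variational-Gaussian notation (ours, not quoted from the paper): with a Gaussian variational posterior q(x) = N(x | m, C) over the N random variables, one minimises the free energy

    \mathcal{F}(m, C) \;=\; \mathbb{E}_{q}\!\left[\log q(x) - \log p(x, y)\right] \;=\; \mathrm{KL}\!\left[\, q(x) \,\|\, p(x \mid y) \,\right] - \log p(y),

whose free parameters are the N-dimensional mean m and the symmetric N \times N covariance C, i.e. N + N(N+1)/2 = O(N²) quantities. The citation statements below refer to reparametrisations of exactly this object that bring the count down.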

Cited by 227 publications (243 citation statements)
References 5 publications
“…Due to the high number of parameters this is both computationally intensive and slow to converge. For that reason, the authors in [34] have reparametrised the problem using a Gaussian approximation [7], involving O(T) parameters, and as a result the approach scales much better with the number of training data points.…”
Section: Hyper-parameter Optimisation (mentioning, confidence: 99%)
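
For context, the O(T) parametrisation referred to here is usually obtained as follows (a sketch under the standard latent-Gaussian-model assumptions, not a quotation from [34] or [7]): with a Gaussian prior N(0, K) over the T latent function values and a likelihood that factorises over data points, the optimal variational Gaussian can be written as

    m = K\nu, \qquad C = \left( K^{-1} + \operatorname{diag}(\lambda) \right)^{-1},

so only the two length-T vectors \nu and \lambda need to be optimised, rather than a full T-dimensional mean and a T \times T covariance.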
“…In this case, missing data prediction may also be achieved by means of the model's predictive distribution. However, for many models this distribution is intractable, and we resort to approximations such as variational Bayes (VB) [7], sampling methods [8], or maximum a posteriori (MAP) [9]. The latter, although a rough approximation, is easily applicable to a wider range of models than VB, is less computationally intensive than sampling, and often provides a good enough estimate.…”
mentioning (confidence: 99%)
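
To make that comparison concrete (a generic illustration, not drawn from the cited paper): writing \theta for the unknowns, D for the observed data and x_* for the missing entries, the predictive distribution and the three approximations read roughly

    p(x_* \mid D) = \int p(x_* \mid \theta)\, p(\theta \mid D)\, d\theta
      \;\approx\; \int p(x_* \mid \theta)\, q(\theta)\, d\theta \quad \text{(VB)}
      \;\approx\; \tfrac{1}{S} \sum_{s=1}^{S} p(x_* \mid \theta^{(s)}), \;\; \theta^{(s)} \sim p(\theta \mid D) \quad \text{(sampling)}
      \;\approx\; p(x_* \mid \hat{\theta}_{\mathrm{MAP}}), \;\; \hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} p(\theta \mid D) \quad \text{(MAP)},

which shows the trade-off the quote describes: MAP collapses the integral to a single point estimate, VB replaces the posterior by a tractable q, and sampling approximates the integral by an average.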
“…We use a variational Gaussian (VG) approximate inference approach [18] where the variational distribution is assumed to be a Gaussian. Variational Gaussian approaches can be slow because of the requirement to estimate the covariance matrix.…”
Section: Variational Inference (mentioning, confidence: 99%)
“…The motivation for this q(Z) is that it is possible to compute the derivatives of L(q) with respect to the Gaussian parameters. Following [14], we compute the derivatives using Monte Carlo expectations to address the complex relations in our model (cf. Eq.…”
Section: Model Inference (mentioning, confidence: 99%)
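
A generic way to form such Monte Carlo gradient estimates is sketched below, using the reparameterisation z = m + L\epsilon with \epsilon ~ N(0, I); the function name elbo_grad_mc, the logp_and_grad callback and the estimator itself are illustrative assumptions, not necessarily the construction used in [14].

import numpy as np

def elbo_grad_mc(logp_and_grad, m, L, n_samples=64, rng=None):
    """Monte Carlo gradients of E_q[log p(z)] + H[q] w.r.t. the parameters
    of q(z) = N(m, L L^T), with L lower triangular and positive diagonal.

    Reparameterisation: z = m + L @ eps with eps ~ N(0, I), so
      d/dm E_q[f(z)] ~ mean_s grad f(z_s)
      d/dL E_q[f(z)] ~ mean_s outer(grad f(z_s), eps_s)
    """
    rng = np.random.default_rng() if rng is None else rng
    d = m.size
    grad_m = np.zeros(d)
    grad_L = np.zeros((d, d))
    for _ in range(n_samples):
        eps = rng.standard_normal(d)
        z = m + L @ eps
        _, g = logp_and_grad(z)              # gradient of log p at the sample
        grad_m += g
        grad_L += np.outer(g, eps)
    grad_m /= n_samples
    grad_L /= n_samples
    grad_L += np.diag(1.0 / np.diag(L))      # entropy term: d(log det L)/dL
    return grad_m, np.tril(grad_L)           # keep the factor lower triangular

# Toy usage: standard-normal target, so the gradients point towards m = 0, L = I.
logp_and_grad = lambda z: (-0.5 * z @ z, -z)
gm, gL = elbo_grad_mc(logp_and_grad, m=np.ones(3), L=np.eye(3))

The returned gradients can then be fed to any gradient-based optimiser; the variance of the estimates shrinks as n_samples grows.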