2020
DOI: 10.48550/arxiv.2010.10177
Preprint

Sparse Gaussian Process Variational Autoencoders

Abstract: Large, multi-dimensional spatio-temporal datasets are omnipresent in modern science and engineering. An effective framework for handling such data is Gaussian process deep generative models (GP-DGMs), which employ GP priors over the latent variables of DGMs. Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points, which are essential for the computational efficiency of GPs, nor do they handle missing data, a natural occurrence in many spatio-tem…
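
The abstract's efficiency claim rests on sparse GP approximations with inducing points: an M-point summary of the N latent inputs stands in for the full N x N prior covariance. The NumPy sketch below illustrates that idea only; it is not the paper's inference scheme, and the RBF kernel, input locations, and all variable names are assumptions made for the example.

    import numpy as np

    def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel matrix between two sets of 1-D inputs.
        sq_dists = (x1[:, None] - x2[None, :]) ** 2
        return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

    # Hypothetical setup: N = 500 latent time stamps, M = 20 inducing points.
    t = np.linspace(0.0, 10.0, 500)                   # GP inputs (e.g. time)
    z = np.linspace(0.0, 10.0, 20)                    # inducing-point locations

    K_tz = rbf_kernel(t, z)                           # N x M cross-covariance
    K_zz = rbf_kernel(z, z) + 1e-6 * np.eye(len(z))   # M x M, jittered for stability

    # Nystrom-style approximation of the N x N prior covariance:
    #   K_tt ~= K_tz @ inv(K_zz) @ K_zt.
    # Every solve involves only the M x M matrix, so the cost drops from
    # O(N^3) for the exact GP to O(N M^2) with M << N.
    K_tt_approx = K_tz @ np.linalg.solve(K_zz, K_tz.T)
    print(K_tt_approx.shape)                          # (500, 500)

This low-rank structure is what makes GP priors over DGM latents tractable at scale, which is the gap the paper identifies in existing GP-DGM inference schemes.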

Citations: cited by 13 publications (27 citation statements: 1 supporting, 26 mentioning, 0 contrasting), with citing publications spanning 2020–2024.
References: 4 publications.
“…Tables 1-3 showcase the comparative results of different models on the three multi-task datasets, respectively. It is notable that the results of GP and GPAR-NL in Tables 1 and 2 are taken from [28], and the results of GP-VAE are taken from [39]. We have the following findings from the comparative results.…”
Section: Results (mentioning)
confidence: 90%
“…This comparative study introduces state-of-the-art LMCs as well as other MTGP competitors, including (i) the Gaussian processes autoregressive regression with nonlinear correlations (GPAR-NL) [28], which has been verified to be superior in comparison to previous MTGPs, for example, CoKriging [56], intrinsic coregionalisation model (ICM) [57], semiparametric latent factor model (SLFM) [12], collaborative multi-output GP (CGP) [26], convolved multi-output GP (CMOGP) [37], and GPRN [22]; (ii) the multi-output GPs with neural likelihoods [31], including NMOGP, NGPRN and SVLMC-DKL; and (iii) the sparse GP variational autoencoders (GP-VAE) [39] with partial inference networks [58]. Besides, the baselines GP and stochastic variational GP (SVGP) [36] are involved in the comparison.…”
Section: Results (mentioning)
confidence: 99%
“…In contrast, we show the efficacy of our approach to dynamic sequential data with dense changes of the factors in time. This general class of models has been further extended by the Sparse GP-VAE (SGP-VAE) [Ashman et al, 2020], the Scalable GP-VAE (SVGP-VAE) [Jazbec et al, 2020], and the Factorized GP-VAE (FGP-VAE) [Jazbec et al, 2021], which all improve their scalability to larger data sets. These extensions could also be readily applied to our model, which however we have not done in this study, since exact inference was still feasible in our experiments.…”
Section: Related Work (mentioning)
confidence: 99%