2019
DOI: 10.48550/arxiv.1904.02303
Preprint

Robust Deep Gaussian Processes

Abstract: This report provides an in-depth overview of the implications and novelty that Generalized Variational Inference (GVI) [22] brings to Deep Gaussian Processes (DGPs) [9]. Specifically, robustness to model misspecification as well as principled alternatives for uncertainty quantification are motivated from an information-geometric view. These modifications have clear interpretations and can be implemented in less than 100 lines of Python code. Most importantly, the corresponding empirical results show that DGPs …
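To make the abstract's point concrete, the following is a minimal sketch of the kind of modification involved: replacing the Gaussian log-likelihood term of a variational objective with a γ-divergence score. This is an illustration only, not the paper's code; the form of the score follows the GVI literature, and the exact constants, the Gaussian-likelihood assumption, and the helper name gamma_log_score are all assumptions.

import numpy as np

def gamma_log_score(y, mu, sigma2, gamma=1.03):
    """Hypothetical helper: gamma-robust score that would be maximized in
    place of the Gaussian log-density log N(y; mu, sigma2). Up to additive
    and multiplicative constants, it recovers the log-score as gamma -> 1."""
    # Gaussian log-density at the observations.
    log_p = -0.5 * np.log(2 * np.pi * sigma2) - 0.5 * (y - mu) ** 2 / sigma2
    # Closed form of log \int N(z; mu, sigma2)^gamma dz for a Gaussian density.
    log_integral = 0.5 * (1.0 - gamma) * np.log(2 * np.pi * sigma2) - 0.5 * np.log(gamma)
    # gamma-score: (gamma/(gamma-1)) * p(y)^(gamma-1) / (\int p^gamma dz)^((gamma-1)/gamma)
    return (gamma / (gamma - 1.0)) * np.exp(
        (gamma - 1.0) * log_p - ((gamma - 1.0) / gamma) * log_integral
    )

Swapping this score into the Monte Carlo estimate of the expected log-likelihood, while leaving the rest of the variational objective untouched, is the kind of small, local change the abstract refers to.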

Cited by 3 publications (4 citation statements). References 12 publications (25 reference statements).

“…Lastly, we present numerical experiments and their results (Section 6.2.4). These findings are also summarized with a higher level of detail in a separate technical report (Knoblauch, 2019b).…”
Section: Deep Gaussian Processes
Citation type: mentioning; confidence: 97%
“…Another objective based on the ELBO, referred to as γ-robust [19], aims at making the training of GPs robust to model misspecification and at controlling uncertainty; the log-likelihood term in the ELBO is replaced with a γ-divergence score, where γ is a hyperparameter set to 1.03 for the rest of the paper.…”
Section: B. Scalable Variational Gaussian Processes
Citation type: mentioning; confidence: 99%
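The γ-divergence score referenced in the statement above is not reproduced in the extracted text. As a hedged reconstruction from the GVI literature, which the statement appears to reference (the exact constants may differ in the cited paper), it replaces \log p(y_n \mid f_n) by

\mathcal{L}_{\gamma}(y_n, f_n) = \frac{\gamma}{\gamma - 1} \cdot \frac{p(y_n \mid f_n)^{\gamma - 1}}{\left( \int p(z \mid f_n)^{\gamma} \, dz \right)^{(\gamma - 1)/\gamma}},

which recovers the log-likelihood, up to additive and multiplicative constants, in the limit γ → 1; for a Gaussian likelihood the integral in the denominator is available in closed form.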
“…In particular, inducing point methods [18] allow for scalable approximations to exact GPs by introducing learnable latent inducing-point variables that sparsify the GP model. Several scalable approximations of the marginal log-likelihood objective were proposed recently [19], [16]; they bound the model evidence and substitute the expensive-to-evaluate exact marginal log-likelihood objective.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
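For context, the sparse variational bound such inducing-point methods build on is the standard SVGP evidence lower bound, stated here from the general literature rather than quoted from the cited papers (notation assumed):

\mathcal{L}_{\mathrm{SVGP}} = \sum_{n=1}^{N} \mathbb{E}_{q(f_n)}\!\left[\log p(y_n \mid f_n)\right] - \mathrm{KL}\!\left(q(\mathbf{u}) \,\|\, p(\mathbf{u})\right),

where \mathbf{u} collects the latent function values at M inducing inputs and q(\mathbf{u}) = \mathcal{N}(\mathbf{m}, \mathbf{S}) is the variational distribution. The γ-robust objective above swaps \log p(y_n \mid f_n) for the γ-score while keeping the KL term.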
“…2.4; as well as two 2-layer models: (DGP) a variational DGP as described in Sec. 2.3; and (γ-DGP) the robust DGP described in Knoblauch (2019).…”
Section: Multivariate Regression
Citation type: mentioning; confidence: 99%