2018 (Preprint)
DOI: 10.48550/arxiv.1810.10368

Scalable Gaussian Processes on Discrete Domains

Vincent Fortuin, Gideon Dresdner, Heiko Strathmann, et al.

Abstract: Kernel methods on discrete domains have shown great promise for many challenging data types, for instance, biological sequence data and molecular structure data. Scalable kernel methods like Support Vector Machines may offer good predictive performance but do not intrinsically provide uncertainty estimates. In contrast, probabilistic kernel methods like Gaussian Processes offer uncertainty estimates in addition to good predictive performance but fall short in terms of scalability. We present the first sparse …
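For readers new to kernels on discrete domains, the sketch below shows one classical example, a k-mer spectrum kernel on strings, which yields a positive semi-definite Gram matrix usable by an SVM or a GP. This particular kernel, the helper names, and the toy sequences are illustrative assumptions, not the construction proposed in the paper.

```python
from collections import Counter
import numpy as np

def spectrum_features(seq: str, k: int = 3) -> Counter:
    """Count all overlapping k-mers in a string."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s: str, t: str, k: int = 3) -> float:
    """Inner product of k-mer count vectors: a valid PSD kernel on strings."""
    fs, ft = spectrum_features(s, k), spectrum_features(t, k)
    return float(sum(fs[kmer] * ft[kmer] for kmer in fs.keys() & ft.keys()))

# Toy DNA sequences (assumed for illustration).
seqs = ["ACGTACGT", "ACGTTTTT", "GGGGCCCC"]
K = np.array([[spectrum_kernel(s, t) for t in seqs] for s in seqs])
print(K)  # symmetric Gram matrix over the discrete input space
```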


Cited by 2 publications (2 citation statements)
References 15 publications (21 reference statements)
“…Optimization of inducing points. When working with sparse Gaussian processes, the selection of inducing point locations can often be crucial for the quality of the approximation (Titsias, 2009; Fortuin et al., 2018; Burt et al., 2019). In our model, we can optimize these inducing point locations jointly with the other components.…”
Section: Synthetic Moving Ball Data
confidence: 99%
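As a minimal sketch of the joint optimization this statement describes, the example below uses GPyTorch's variational sparse GP, where `learn_inducing_locations=True` turns the inducing inputs into ordinary parameters trained jointly with the kernel hyperparameters and variational distribution. The toy data, model choices, and training settings are assumptions for demonstration, not the cited authors' setup.

```python
import torch
import gpytorch

class SparseGP(gpytorch.models.ApproximateGP):
    """Variational sparse GP with learnable inducing point locations."""
    def __init__(self, inducing_points):
        var_dist = gpytorch.variational.CholeskyVariationalDistribution(inducing_points.size(0))
        strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, var_dist,
            learn_inducing_locations=True,  # inducing inputs become trainable parameters
        )
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(self.mean_module(x), self.covar_module(x))

# Toy 1-D regression data (assumed for illustration).
train_x = torch.linspace(0, 1, 200).unsqueeze(-1)
train_y = torch.sin(6.0 * train_x).squeeze(-1) + 0.1 * torch.randn(200)

model = SparseGP(inducing_points=torch.linspace(0, 1, 10).unsqueeze(-1))
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.numel())
optimizer = torch.optim.Adam(list(model.parameters()) + list(likelihood.parameters()), lr=0.05)

model.train()
likelihood.train()
for _ in range(200):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)  # negative ELBO; gradients also reach inducing locations
    loss.backward()
    optimizer.step()
```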
“…One of the outstanding features of Gaussian Process (GP) prediction, in particular, is its usability for designing Bayesian Optimization (BO) algorithms (Moćkus et al., 1978; Jones et al., 1998; Frazier, 2018) and further sequential design strategies (Risk and Ludkovski, 2018; Binois et al., 2019; Bect et al., 2019). While most usual BO and related contributions focus on continuous problems with vector-valued inputs, there has recently been growing interest in GP-related modelling and BO in the presence of discrete and mixed discrete-continuous inputs (Kondor and Lafferty, 2002; Gramacy and Taddy, 2010; Fortuin et al., 2018; Roustant et al., 2018; Garrido-Merchan and Hernández-Lobato, 2018; Ru et al., 2019; Griffiths and Hernández-Lobato, 2019). Here we focus specifically on kernels dedicated to finite set-valued inputs and their application to GP modelling and BO, notably (but not only) in combinatorial optimization.…”
Section: Introduction
confidence: 99%
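To make "kernels dedicated to finite set-valued inputs" concrete, here is a minimal sketch of an exponentiated-Hamming kernel on categorical vectors, a common choice in the discrete-input GP literature, together with a one-step GP posterior mean. The function name, length-scale parameterization, and toy data are assumptions of this sketch, not a construction taken from any of the cited works.

```python
import numpy as np

def hamming_kernel(X, Y, lengthscale=1.0):
    """Exponentiated Hamming kernel on categorical vectors:
    k(x, y) = exp(-d_H(x, y) / lengthscale), where d_H counts mismatched slots."""
    X, Y = np.asarray(X), np.asarray(Y)
    d = (X[:, None, :] != Y[None, :, :]).sum(axis=-1)  # pairwise Hamming distances
    return np.exp(-d / lengthscale)

# Three configurations over four discrete slots (assumed toy data).
X = np.array([["A", "C", "G", "T"],
              ["A", "C", "G", "G"],
              ["T", "T", "T", "T"]])
K = hamming_kernel(X, X, lengthscale=2.0)

# GP posterior mean at the third point, given noisy observations at the first two.
y = np.array([1.0, 0.8])
K_train = K[:2, :2] + 1e-2 * np.eye(2)   # jitter acts as observation noise
k_star = K[2, :2]
mean = k_star @ np.linalg.solve(K_train, y)
print(K)
print(mean)
```

In a BO loop over a combinatorial space, such a kernel would supply the GP surrogate's covariance, with the acquisition function maximized by enumeration or a discrete search heuristic rather than gradient ascent.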