2020
DOI: 10.48550/arxiv.2002.09309
Preprint

Efficiently Sampling Functions from Gaussian Process Posteriors

Cited by 4 publications (9 citation statements). References 0 publications.
“…This distinction mirrors the one in Machine Learning between aleatoric uncertainty, due to the stochastic variability inherent in querying 𝑓(𝑥), and epistemic uncertainty, due to the lack of knowledge about the actual structure of 𝑓(𝑥), which can be reduced by collecting more information. The same point is argued in (Gershman, 2019), which associates random exploration with Thompson Sampling, which consists in drawing a sample of 𝑓(𝑥) from the GP model and then making the next decision according to the optimization of that sample (Wilson et al, 2020b).…”
Section: Related Work
confidence: 90%
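A minimal sketch of the Thompson Sampling step summarized above: draw one sample of 𝑓(𝑥) from the GP posterior over a grid of candidates and pick the maximizer. The kernel, hyperparameters, observations, and candidate grid are illustrative assumptions, not values from the cited works.

```python
# Illustrative Thompson Sampling step with an exact GP posterior sample.
# All data, kernel choices, and hyperparameters below are assumptions.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel k(a, b) = variance * exp(-(a - b)^2 / (2 * lengthscale^2)).
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def thompson_step(X_obs, y_obs, X_cand, noise=1e-2, rng=np.random.default_rng(0)):
    # Exact GP posterior over the candidate points given noisy observations.
    K_xx = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_xs = rbf_kernel(X_obs, X_cand)
    K_ss = rbf_kernel(X_cand, X_cand)
    L = np.linalg.cholesky(K_xx)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = K_xs.T @ alpha
    v = np.linalg.solve(L, K_xs)
    cov = K_ss - v.T @ v
    # Draw one posterior sample of f over the candidates and optimize it.
    f_sample = rng.multivariate_normal(mean, cov + 1e-9 * np.eye(len(X_cand)))
    return X_cand[np.argmax(f_sample)]

X_obs = np.array([0.1, 0.4, 0.9])
y_obs = np.sin(6 * X_obs)
X_cand = np.linspace(0.0, 1.0, 200)
x_next = thompson_step(X_obs, y_obs, X_cand)
```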
“…An analysis of TS has been recently proposed in (Russo & Van Roy, 2016), concluding that TS is biased towards exploitation and suggesting that an 𝜀-greedy version of TS can lead to better performance (i.e., randomly selecting 𝑥^(𝑛+1) within the search space with probability 𝜀, or performing TS with probability 1 − 𝜀). An efficient sampling procedure has been recently proposed in (Hahn et al, 2019; Wilson et al, 2020b). Sampling from the GP posterior is the basis of the information-based acquisition functions described in the following section.…”
Section: Information-based Acquisition Functions
confidence: 99%
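The 𝜀-greedy variant of TS mentioned in this excerpt can be sketched by combining random exploration with the Thompson step from the previous sketch; the probability 𝜀 and helper names are assumptions for illustration.

```python
# epsilon-greedy Thompson Sampling: with probability eps pick a candidate
# uniformly at random, otherwise optimize a GP posterior sample (standard TS).
# Reuses thompson_step from the sketch above; eps = 0.1 is an assumed default.
import numpy as np

def eps_greedy_thompson(X_obs, y_obs, X_cand, eps=0.1, rng=np.random.default_rng(1)):
    if rng.random() < eps:
        # Random exploration within the search space.
        return X_cand[rng.integers(len(X_cand))]
    # Otherwise perform ordinary Thompson Sampling.
    return thompson_step(X_obs, y_obs, X_cand, rng=rng)
```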
“…In Theorem 1, we first establish an upper bound on the regret of S-GP-TS for any approximate model that satisfies some conditions on the quality of its posterior approximations (Assumptions 1 and 2). Then, focusing on SVGP models, we provide bounds on the number m of inducing variables required to guarantee a low regret when the decomposed sampling rule of Wilson et al. (2020) is used. The bounds on m are characterized by the spectrum of the kernel of the GP model.…”
Section: Contributions
confidence: 99%
“…The inducing variables are manifested either as inducing points or as inducing features (sometimes referred to as inducing inter-domain variables; Burt et al, 2019; van der Wilk et al, 2020). Furthermore, Wilson et al. (2020) introduced an efficient sampling rule (referred to as decomposed sampling) which decomposes a sample from the posterior into the sum of a prior with M features (see Sec. 3.2) and an SVGP update, reducing the computational cost of drawing a sample to O((m + M)N).…”
Section: Introduction
confidence: 99%
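A minimal sketch of the decomposed sampling rule summarized here: a prior sample built from M random Fourier features plus an update on m inducing points. The variational mean/covariance, kernel hyperparameters, and inducing locations are illustrative placeholders, and rbf_kernel is the helper defined in the first sketch.

```python
# Decomposed ("prior + update") sampling sketch for a 1-D SVGP with an RBF kernel.
# Cost of evaluating one sample at N query points is roughly O((m + M) N).
import numpy as np

rng = np.random.default_rng(0)
lengthscale, variance, M = 0.2, 1.0, 500

# Random Fourier features approximating the RBF prior: omega ~ N(0, 1/lengthscale^2).
omega = rng.normal(0.0, 1.0 / lengthscale, size=M)
phase = rng.uniform(0.0, 2 * np.pi, size=M)

def features(x):
    return np.sqrt(2.0 * variance / M) * np.cos(np.outer(x, omega) + phase)

def draw_decomposed_sample(Z, m_u, S_u, X_query):
    # Prior function sample: f0(x) = phi(x) @ w with w ~ N(0, I).
    w = rng.normal(size=M)
    f0_query, f0_Z = features(X_query) @ w, features(Z) @ w
    # Sample the inducing variables from the variational posterior q(u) = N(m_u, S_u).
    u = rng.multivariate_normal(m_u, S_u)
    # Update the prior sample so that it agrees with the sampled u at the inducing points.
    K_zz = rbf_kernel(Z, Z, lengthscale, variance) + 1e-8 * np.eye(len(Z))
    K_xz = rbf_kernel(X_query, Z, lengthscale, variance)
    return f0_query + K_xz @ np.linalg.solve(K_zz, u - f0_Z)

Z = np.linspace(0.0, 1.0, 10)   # m = 10 inducing points (assumed)
f_sample = draw_decomposed_sample(Z, np.zeros(10), 0.1 * np.eye(10),
                                  np.linspace(0.0, 1.0, 200))
```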
“…Therefore, L𝜀 with 𝜀 ∼ N(0, I) can be used to draw samples from N(0, K), and L⁻¹b can be used to “whiten” the vector b. However, the Cholesky factor requires O(N³) computation and O(N²) memory for an N × N covariance matrix K. To avoid this large complexity, randomized algorithms [49,53], low-rank/sparse approximations [34,51,71], or alternative distributions [68] are often used to approximate the sampling and whitening operations.…”
Section: Introduction
confidence: 99%
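The Cholesky-based operations described in this excerpt can be illustrated directly; the toy covariance matrix below is an assumption for demonstration.

```python
# Drawing a correlated sample L @ eps ~ N(0, K) and whitening it with L^{-1}.
# The Cholesky factorization costs O(N^3) time and O(N^2) memory.
import numpy as np

rng = np.random.default_rng(0)
N = 50
X = np.linspace(0.0, 1.0, N)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / 0.2 ** 2) + 1e-8 * np.eye(N)

L = np.linalg.cholesky(K)                 # lower-triangular factor, K = L @ L.T
sample = L @ rng.normal(size=N)           # sample ~ N(0, K)
whitened = np.linalg.solve(L, sample)     # L^{-1} sample ~ N(0, I) since sample ~ N(0, K)
```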