2022
DOI: 10.48550/arxiv.2201.12909
Preprint

Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times

Abstract: Computing a Gaussian process (GP) posterior has a computational cost cubic in the number of historical points. A reformulation of the same GP posterior highlights that this complexity mainly depends on how many unique historical points are considered. This can have important implications in active learning settings, where the set of historical points is constructed sequentially by the learner. We show that sequential black-box optimization based on GPs (GP-Opt) can be made efficient by sticking to a candidate…
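A minimal sketch of the equivalence the abstract points to, assuming i.i.d. Gaussian observation noise: querying the same point k times is statistically equivalent to one averaged observation with noise variance σ²/k, so a posterior over n observations that hit only u unique inputs can be computed at O(u³) instead of O(n³) cost. The NumPy code below illustrates that standard reduction, not the paper's own implementation; all names are illustrative.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X, y, Xs, noise_var):
    # Standard GP posterior; `noise_var` is a per-observation variance vector.
    K = rbf(X, X) + np.diag(noise_var)
    Ks = rbf(X, Xs)
    Kinv_Ks = np.linalg.solve(K, Ks)
    mean = Kinv_Ks.T @ y
    var = rbf(Xs, Xs).diagonal() - np.einsum('ij,ij->j', Ks, Kinv_Ks)
    return mean, var

rng = np.random.default_rng(0)
sigma2 = 0.1
Xu = rng.uniform(-2.0, 2.0, size=(5, 1))    # 5 unique candidates...
idx = rng.integers(0, 5, size=300)          # ...queried 300 times in total
X = Xu[idx]
y = np.sin(X[:, 0]) + rng.normal(0.0, np.sqrt(sigma2), size=300)
Xs = np.linspace(-2.0, 2.0, 7)[:, None]     # test inputs

# Naive posterior on all n = 300 points: O(n^3).
m_full, v_full = gp_posterior(X, y, Xs, np.full(300, sigma2))

# Collapsed posterior on the u = 5 unique points: O(u^3). A point
# observed k times contributes its averaged target with noise sigma^2 / k.
k = np.bincount(idx, minlength=5)
y_bar = np.bincount(idx, weights=y) / k
m_fast, v_fast = gp_posterior(Xu, y_bar, Xs, sigma2 / k)

assert np.allclose(m_full, m_fast) and np.allclose(v_full, v_fast)
```

The two posteriors agree exactly (up to floating point), which is why the cost is governed by the number of unique candidates rather than the raw number of evaluations.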

Cited by 2 publications (12 citation statements)
References 6 publications
“…Putting the two stages together, we have the above result. Thus, it remains to bound the number of batches H_l within each phase l. Fortunately, inspired by [Cal22], we are able to show that H_l can be upper bounded by the maximum information gain. We state this result in Lemma 5.6 and provide the proof in Appendix D.…”
Section: Results (mentioning)
confidence: 95%
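For reference, the quantity invoked in this argument is the maximum information gain from the GP-bandit literature, together with the standard sum-of-posterior-variances bound. The display below is the textbook definition and inequality (Srinivas et al.), not the cited Lemma 5.6 itself; σ² denotes the noise variance and K_A the kernel matrix on the set A.

```latex
% Standard GP-bandit quantities, stated here for context only.
\[
  \gamma_T \;=\; \max_{A \subseteq \mathcal{X},\, |A| = T}
      \tfrac{1}{2} \log\det\!\bigl(I + \sigma^{-2} K_A\bigr),
  \qquad
  \sum_{t=1}^{T} \sigma_{t-1}^{2}(x_t)
      \;\le\; \frac{2}{\log\!\bigl(1 + \sigma^{-2}\bigr)}\, \gamma_T .
\]
```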
“…As we already know, GP-UCB has a computational complexity of O(|D|T^3), because it requires computing the posterior mean and variance in O(T^2) per step and then finding the action that maximizes the UCB function. Recently, BBKB [Cal20] improved the time complexity to O(|D|T γ_T^2), and later MINI-GP-Opt [Cal22] further reduced it to O(T + |D|γ_T^3 + γ_T^4), which is currently the fastest no-regret algorithm. Although more feedback is needed to address the additional bias in our setting, our algorithm can still achieve an improvement, with the highest-order term being O(γ_T T^α).…”
Section: Results (mentioning)
confidence: 99%
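To make the operation count in this statement concrete, here is a generic GP-UCB loop; it is a sketch under common assumptions, not the implementation from [Cal20] or [Cal22]. Each round t refits on the t points gathered so far (an O(t³) solve) and scores all |D| arms (O(|D|t²)); summed over T rounds this gives the quoted O(|D|T³). The fixed β is a simplification: the regret analyses use a growing schedule β_t.

```python
import numpy as np

def rbf(A, B, ell=0.5):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_ucb(f, D, T, sigma2=0.01, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X, y = [], []
    for t in range(T):
        if not X:
            x = D[rng.integers(len(D))]           # arbitrary first query
        else:
            Xa, ya = np.array(X), np.array(y)
            K = rbf(Xa, Xa) + sigma2 * np.eye(t)  # O(t^3) factorization
            Ks = rbf(Xa, D)                       # O(|D| t) kernel evals
            Kinv_Ks = np.linalg.solve(K, Ks)      # O(|D| t^2) per step
            mu = Kinv_Ks.T @ ya
            var = 1.0 - np.einsum('ij,ij->j', Ks, Kinv_Ks)
            # Pick the arm maximizing the upper confidence bound.
            x = D[np.argmax(mu + beta * np.sqrt(np.maximum(var, 0.0)))]
        X.append(x)
        y.append(f(x) + rng.normal(0.0, np.sqrt(sigma2)))
    return np.array(X), np.array(y)

# Toy run on a 1-D arm set: summing O(|D| t^2) over t = 1..T is O(|D| T^3).
D = np.linspace(-2.0, 2.0, 200)[:, None]
X, y = gp_ucb(lambda x: -np.sin(3 * x[0]) - x[0] ** 2, D, T=30)
print("best observed value:", y.max())
```

Under the "few unique candidates" scheme the paper proposes, repeated queries collapse as in the earlier sketch, which is how the |D|T³ dependence is brought down to the γ_T-dependent terms in the quote.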