2020
DOI: 10.1287/opre.2019.1966

Parallel Bayesian Global Optimization of Expensive Functions

Abstract: Large-Scale Parallel Bayesian Optimization

Cited by 64 publications (46 citation statements)
References 36 publications

“…A common operation in most algorithms is Cholesky decomposition which is used to invert the kernel matrix and is generally O(n³) for n data points, but with care this can be calculated incrementally as new points arrive, reducing the complexity to O(n²) [123]. Several algorithms gain speed-up by implementing part of the algorithm on a GPU, which can be up to 100 times faster than the equivalent single-threaded code [124].…”
Section: Discussion
confidence: 99%
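
The incremental update this statement refers to can be sketched concretely. The helper below is not from the cited work; it is a minimal illustration, assuming a kernel matrix K with lower-triangular Cholesky factor L, of how one new data point can be absorbed in O(n²) via a triangular solve plus a Schur-complement diagonal entry, rather than refactorising the whole matrix in O(n³).

```python
import numpy as np
from scipy.linalg import solve_triangular

def cholesky_append(L, k_new, k_nn):
    """Grow the lower-triangular Cholesky factor L (K = L @ L.T) by one data
    point in O(n^2), instead of refactorising the (n+1) x (n+1) kernel matrix
    from scratch in O(n^3).

    L     : (n, n) lower-triangular factor of the current kernel matrix K
    k_new : (n,) kernel values between the new point and the n existing points
    k_nn  : kernel value of the new point with itself (plus any noise term)
    """
    # New off-diagonal row: solve L b = k_new, an O(n^2) triangular solve.
    b = solve_triangular(L, k_new, lower=True)
    # New diagonal entry comes from the Schur complement k_nn - b.b.
    d = np.sqrt(k_nn - b @ b)
    n = L.shape[0]
    L_new = np.zeros((n + 1, n + 1))
    L_new[:n, :n] = L
    L_new[n, :n] = b
    L_new[n, n] = d
    return L_new
```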
“…• MOE (https://github.com/Yelp/MOE) supports parallel optimisation via multi-point stochastic gradient ascent [124]. Interfaces are provided for Python and C++, and optimisation can be accelerated on GPU hardware.…”
Section: Discussion
confidence: 99%
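
MOE's own API is not reproduced here. Purely as a hedged sketch of the idea behind multi-point optimisation, the function below (illustrative name and signature) gives a Monte Carlo estimate of the q-point expected improvement from reparameterised GP posterior samples; maximising such an estimate over the q candidate locations, e.g. by multistart stochastic gradient ascent, is the computation these tools parallelise and accelerate on GPUs.

```python
import numpy as np

def q_ei_monte_carlo(mu, L, best_f, n_samples=2000, rng=None):
    """Monte Carlo estimate of the q-point expected improvement for a batch of
    q candidate points, for a minimisation problem.

    mu     : (q,) GP posterior mean at the q candidates
    L      : (q, q) Cholesky factor of the posterior covariance at the candidates
    best_f : best objective value observed so far
    """
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal((n_samples, len(mu)))  # reparameterisation trick
    y = mu + z @ L.T                               # joint posterior samples, (S, q)
    improvement = np.maximum(best_f - y.min(axis=1), 0.0)
    return improvement.mean()
```

Because each sample is written as mu + L z with z held fixed, the estimate varies smoothly with the candidate locations through mu and L, which is what makes gradient-based ascent over the whole batch feasible.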
“…SigOpt is an AutoML solution that uses a Bayesian method to construct a feedback mechanism between model output and different values for hyperparameters. Thus, the model can be tuned by selecting the best network parameters to maximise performance [48].…”
Section: Proposed Methodology
confidence: 99%
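
SigOpt's client library is not shown here. Purely as an illustration of the same propose–train–report feedback loop, the sketch below uses the open-source scikit-optimize package with a stand-in objective; the hyperparameter names, ranges, and the dummy loss are assumptions, not part of the cited work.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical search space for two network hyperparameters.
space = [
    Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(16, 256, name="hidden_units"),
]

def objective(params):
    learning_rate, hidden_units = params
    # Stand-in for "train the network and return a validation loss"; a real
    # run would feed the measured metric back to the optimiser here.
    return (learning_rate - 1e-3) ** 2 + ((hidden_units - 128) ** 2) * 1e-6

# The feedback loop: fit a surrogate to past (hyperparameters, loss) pairs,
# pick the next configuration via an acquisition function, evaluate, repeat.
result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best hyperparameters:", result.x, "best loss:", result.fun)
```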
“…In the deep learning field, a number of strategies have been proposed to effectively choose hyper-parameters, including random search (Bergstra and Bengio, 2012), evolutionary strategies such as the NEAT and HyperNEAT algorithms (Stanley and Miikkulainen, 2002; Miikkulainen et al., 2017), and Bayesian optimization (Dewancker et al., 2016a,b; Wang et al., 2016). Nonetheless, manual search is still a commonly used technique, because it is easy to implement and exploits researchers' experience to reduce the number of trials, which is useful when training large networks that require extensive computational resources.…”
Section: Related Work
confidence: 99%
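
As a small illustration of the random-search baseline named in this statement, the sketch below samples each hyper-parameter independently from hand-chosen ranges; the parameter names, ranges, and the `evaluate` callback are placeholders for a real training run, not taken from the cited papers.

```python
import random

def random_search(evaluate, n_trials=50, seed=0):
    """Sample each hyper-parameter independently from its range (Bergstra and
    Bengio, 2012) and keep the best-scoring configuration."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": 10 ** rng.uniform(-5, -1),   # log-uniform draw
            "dropout": rng.uniform(0.0, 0.5),
            "batch_size": rng.choice([32, 64, 128, 256]),
        }
        score = evaluate(params)  # e.g. validation loss of a trained network
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```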