Published: 2020
DOI: 10.1609/aaai.v34i04.5910

Efficient Automatic CASH via Rising Bandits

Abstract: Combined Algorithm Selection and Hyperparameter optimization (CASH) is one of the most fundamental problems in Automatic Machine Learning (AutoML). Existing Bayesian optimization (BO) based solutions turn the CASH problem into a Hyperparameter Optimization (HPO) problem by combining the hyperparameters of all machine learning (ML) algorithms into a single search space, and then use BO methods to solve it. As a result, these methods suffer from low efficiency due to the huge hyperparameter space of CASH. To alleviate this …
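As a rough illustration of the joint-space formulation the abstract describes (a sketch of the general idea, not the paper's own code), the snippet below merges the algorithm choice and each algorithm's hyperparameters into one conditional search space and evaluates randomly sampled configurations; the two algorithms, value grids, and budget are illustrative assumptions.

```python
# Minimal sketch of a joint CASH search space (illustrative only): the
# algorithm choice is a categorical hyperparameter, and each algorithm's
# hyperparameters are only active when that algorithm is selected.
import random

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Assumed toy space: two algorithms with a few hyperparameter values each.
JOINT_SPACE = {
    "random_forest": {"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
    "svm": {"C": [0.1, 1.0, 10.0], "gamma": ["scale", "auto"]},
}

def sample_config():
    """Draw one configuration from the joint space (algorithm + hyperparameters)."""
    algo = random.choice(list(JOINT_SPACE))
    params = {name: random.choice(values) for name, values in JOINT_SPACE[algo].items()}
    return algo, params

def evaluate(algo, params, X, y):
    """Black-box objective: cross-validated accuracy of one configuration."""
    model = RandomForestClassifier(**params) if algo == "random_forest" else SVC(**params)
    return cross_val_score(model, X, y, cv=3).mean()

X, y = load_digits(return_X_y=True)
best_score, best_cfg = -1.0, None
for _ in range(20):  # plain random search over the joint space as a baseline
    algo, params = sample_config()
    score = evaluate(algo, params, X, y)
    if score > best_score:
        best_score, best_cfg = score, (algo, params)
print(best_cfg, best_score)
```

A BO-based CASH solver would replace the random sampling here with a surrogate-guided search over the same joint space, which is exactly where the huge hyperparameter space hurts efficiency.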

Cited by 22 publications (35 citation statements).
References 20 publications.

Citation statements:

“…Meanwhile, a class of methods in the HPO field treats the selection of algorithms as a new hyper-parameter to optimize. They optimize over the joint search space with probabilistic models [25,28,43,48,51], which could be another research direction. Tuning Budget Allocation.…”
Section: Research Opportunities (mentioning, confidence: 99%)
“…Automating Individual Components. Apart from end-to-end AutoML, many efforts have been devoted to studying sub-problems in AutoML: (1) feature engineering [33-36, 56], (2) algorithm selection [12,15,38,45,50,68], and (3) hyper-parameter tuning [4,14,23,25,29,31,37,43,47,58,63,65,66,76]. Meta-learning methods [16,19,74] for hyper-parameter tuning can leverage auxiliary knowledge acquired from previous tasks to achieve faster optimization.…”
Section: Related Work (mentioning, confidence: 99%)
“…Given the inherent uncertainty in our estimation method, rather than returning a single point estimate, we instead return a lower bound 𝑙 and an upper bound 𝑢. We refer readers to [45] for the details of how the lower and upper bounds are established. Moreover, one can query a building block about its expected utility improvement (EUI) via…”
Section: Interfaces (mentioning, confidence: 99%)
“…Traditional BBO with a single objective has many applications: 1) automatic A/B testing, 2) experimental design [15], 3) knobs tuning in database [45,47], and 4) automatic hyper-parameter tuning [6,27,32,43], one of the most indispensable components in AutoML systems [1] such as Microsoft's Azure Machine Learning, Google's Cloud Machine Learning, Amazon Machine Learning [34], and IBM's Watson Studio AutoAI, where the task is to minimize the validation error of a machine learning algorithm as a function of its hyper-parameters. Recently, generalized BBO emerges and has been applied to many areas such as 1) processor architecture and circuit design [2], 2) resource allocation [18], and 3) automatic chemical design [22], which requires more general functionalities that may not be supported by traditional BBO, such as multiple objectives and constraints.…”
(mentioning, confidence: 99%)
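To make concrete the idea that recurs in the statements above, namely treating algorithm selection as one more hyper-parameter and optimizing the joint space with a probabilistic model of the validation error, here is a rough sequential model-based sketch. It is an assumption on my part, not the implementation of any cited system: the random-forest surrogate, greedy acquisition, encoding, and budgets are all illustrative.

```python
# Rough sketch of sequential model-based optimization over the joint
# "algorithm + hyperparameters" space (assumed example, not a cited system):
# a random-forest surrogate scores candidate configurations.
import random

import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def sample_config():
    """One point in the joint space: (algorithm, hyperparameter 1, hyperparameter 2)."""
    if random.random() < 0.5:
        return ("rf", random.choice([50, 100, 200]), random.choice([3, 5, 10]))
    return ("svm", random.choice([0.1, 1.0, 10.0]), random.choice([1e-3, 1e-2, 1e-1]))

def encode(cfg):
    """Numeric encoding: algorithm choice as a 0/1 feature plus its hyperparameters."""
    algo, a, b = cfg
    return [0.0 if algo == "rf" else 1.0, float(a), float(b)]

def objective(cfg):
    """Black-box objective: cross-validated accuracy of the chosen configuration."""
    algo, a, b = cfg
    model = (RandomForestClassifier(n_estimators=a, max_depth=b)
             if algo == "rf" else SVC(C=a, gamma=b))
    return cross_val_score(model, X, y, cv=3).mean()

# Warm-start with random evaluations, then let the surrogate rank new candidates.
history = [(cfg, objective(cfg)) for cfg in (sample_config() for _ in range(5))]
for _ in range(10):
    surrogate = RandomForestRegressor(n_estimators=50).fit(
        np.array([encode(c) for c, _ in history]),
        np.array([s for _, s in history]),
    )
    candidates = [sample_config() for _ in range(50)]
    # Greedy acquisition: evaluate the candidate with the highest predicted score.
    nxt = max(candidates, key=lambda c: surrogate.predict([encode(c)])[0])
    history.append((nxt, objective(nxt)))

print(max(history, key=lambda t: t[1]))
```

Real BO-based CASH and HPO systems use better-calibrated probabilistic models and acquisition functions (e.g., expected improvement) rather than the greedy predicted mean above; the point here is only how the algorithm choice is folded into a single search space that the model reasons over.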