Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021
DOI: 10.1145/3447548.3467098
Amazon SageMaker Automatic Model Tuning: Scalable Gradient-Free Optimization

Abstract: Tuning complex machine learning systems is challenging. Machine learning models typically expose a set of hyperparameters, be it regularization, architecture, or optimization parameters, whose careful tuning is critical to achieve good performance. To democratize access to such systems, it is essential to automate this tuning process. This paper presents Amazon SageMaker Automatic Model Tuning (AMT), a fully managed system for black-box optimization at scale. AMT finds the best version of a machine learning mo…

Cited by 14 publications (15 citation statements). References 27 publications (31 reference statements).
“…This procedure is beneficial if it is repeated several times while narrowing the space of hyperparameters each time. Since hyperparameter optimization is itself an optimization problem, algorithms such as Bayesian optimization have been designed for this task and are frequently applied to deep ML models and kernel methods. Because manually tuning hyperparameters is tedious, and expert knowledge is required in most cases to find optimal hyperparameters, ML models have been designed which automatically learn optimal hyperparameters and require only little human intervention, see, e.g., refs […].…”
Section: Models
confidence: 99%
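The iterative narrowing described in the excerpt above can be sketched as a plain random search whose sampling box shrinks around the best configuration found so far. This is a hedged illustration only: the toy objective, the parameter names `lr` and `reg`, and the shrink schedule are all hypothetical and are not AMT's actual algorithm.

```python
import random

def train_and_score(lr, reg):
    # Hypothetical stand-in for a real model's validation score;
    # it peaks at lr=0.1, reg=0.01 (higher is better).
    return -((lr - 0.1) ** 2 + (reg - 0.01) ** 2)

def narrowed_random_search(lo, hi, rounds=3, samples=20, shrink=0.5, seed=0):
    """Random search over [lo, hi] boxes, halving the box each round."""
    rng = random.Random(seed)
    best = None  # (score, lr, reg)
    for _ in range(rounds):
        for _ in range(samples):
            lr = rng.uniform(lo[0], hi[0])
            reg = rng.uniform(lo[1], hi[1])
            score = train_and_score(lr, reg)
            if best is None or score > best[0]:
                best = (score, lr, reg)
        # Narrow the search box around the incumbent for the next round.
        _, blr, breg = best
        span = [(h - l) * shrink / 2 for l, h in zip(lo, hi)]
        lo = [max(l, c - s) for l, c, s in zip(lo, (blr, breg), span)]
        hi = [min(h, c + s) for h, c, s in zip(hi, (blr, breg), span)]
    return best

best = narrowed_random_search([1e-4, 1e-5], [1.0, 1.0])
```

Each round spends the same budget on a smaller box centered on the incumbent, which is the "narrowing the space every time" idea from the quote, traded off against the risk of shrinking past the true optimum.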
“…Amazon SageMaker [150] is an AutoML tool built on Amazon Web Services (AWS). It includes automated model tuning as a major module.…”
Section: AutoML Tools
confidence: 99%
“…The next set of approaches are those which inherently involve sequential decision making. Bayesian Optimization (BO, Mockus (1974), Jones et al (1998), Brochu et al (2010)) is one of the most popular approaches to date, used for industrial applications (Golovin et al, 2017; Balandat et al, 2020; Perrone et al, 2021) and a variety of scientific experimentation (Frazier & Wang, 2015; Hernández-Lobato et al, 2017; Li et al, 2018; Griffiths & Hernández-Lobato, 2020; Tran et al, 2021; van Bueren et al, 2021). For RL applications, one of the most prominent uses of BO was for tuning AlphaGo's hyperparameters, which include its core Monte Carlo Tree Search (MCTS) (Browne et al, 2012) hyperparameters and time-control settings.…”
Section: Bayesian Optimization
confidence: 99%
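The Bayesian optimization loop the excerpt refers to — fit a surrogate, maximize an acquisition function, evaluate, repeat — can be sketched with a small NumPy-only Gaussian-process surrogate and expected improvement. The RBF kernel, its length-scale, the candidate grid, and the toy objective are illustrative assumptions, not the setup used by AMT or AlphaGo.

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression posterior mean and standard deviation.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf(x_test, x_test)) - np.sum(v * v, axis=0),
                  1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI acquisition for maximization.
    z = (mu - best) / sigma
    cdf = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (mu - best) * cdf + sigma * pdf

def bayes_opt(f, iters=15, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, 3)          # small initial design
    y = np.array([f(v) for v in x])
    grid = np.linspace(0, 1, 201)     # candidate pool over [0, 1]
    for _ in range(iters):
        mu, sigma = gp_posterior(x, y, grid)
        ei = expected_improvement(mu, sigma, y.max())
        nxt = grid[int(np.argmax(ei))]  # point with highest EI
        x = np.append(x, nxt)
        y = np.append(y, f(nxt))
    i = int(np.argmax(y))
    return x[i], y[i]

# Maximize a hypothetical black-box objective peaking at x = 0.3.
x_best, y_best = bayes_opt(lambda v: -(v - 0.3) ** 2)
```

The surrogate makes each expensive evaluation count: EI balances exploiting the current best region against exploring where the GP is still uncertain, which is why BO suits the costly black-box settings (industrial tuning, scientific experimentation) listed in the quote.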