2020
DOI: 10.48550/arxiv.2001.11261
Preprint

HAMLET -- A Learning Curve-Enabled Multi-Armed Bandit for Algorithm Selection

Abstract: Automated algorithm selection and hyperparameter tuning facilitate the application of machine learning. Traditional multi-armed bandit strategies look to the history of observed rewards to identify the most promising arms for optimizing expected total reward in the long run. When considering limited time budgets and computational resources, this backward view of rewards is inappropriate, as the bandit should instead look into the future and anticipate the highest final reward at the end of a specified time budget. T…
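
The forward-looking selection rule sketched in the abstract can be illustrated with a small example: instead of ranking arms by their average observed reward, each arm's learning curve is extrapolated to the end of the remaining budget, and the arm with the highest anticipated final score is trained next. The class names, the a + b/t curve model, and the budget bookkeeping below are illustrative assumptions only, not the paper's actual formulation; HAMLET's learning-curve model and bandit policy may differ.

```python
import numpy as np


class LearningCurveArm:
    """One candidate algorithm; records validation accuracy per unit of training time."""

    def __init__(self, name):
        self.name = name
        self.times = []    # cumulative training time at each observation (must be > 0)
        self.scores = []   # validation accuracy observed at that time

    def record(self, elapsed, score):
        self.times.append(elapsed)
        self.scores.append(score)

    def projected_score(self, target_time):
        """Extrapolate the learning curve to `target_time`.

        A least-squares fit of score ~ a + b / t is used here as a simple
        stand-in for a learning-curve model (an assumption of this sketch).
        """
        if len(self.times) < 2:
            return float("inf")  # force initial exploration of barely-seen arms
        t = np.asarray(self.times, dtype=float)
        y = np.asarray(self.scores, dtype=float)
        X = np.column_stack([np.ones_like(t), 1.0 / t])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        a, b = coef
        return a + b / target_time


def select_arm(arms, time_spent, budget):
    """Pick the arm with the highest anticipated score at the end of the budget."""
    remaining = max(budget - time_spent, 1e-9)

    def anticipated_value(arm):
        last = arm.times[-1] if arm.times else 0.0
        return arm.projected_score(last + remaining)

    return max(arms, key=anticipated_value)


if __name__ == "__main__":
    # Toy usage: two simulated arms whose accuracy improves with training time.
    rng = np.random.default_rng(0)
    arms = [LearningCurveArm("svm"), LearningCurveArm("random_forest")]
    budget, spent, step = 100.0, 0.0, 1.0
    while spent < budget:
        arm = select_arm(arms, spent, budget)
        elapsed = (arm.times[-1] if arm.times else 0.0) + step
        score = 0.9 - 0.3 / elapsed + rng.normal(0.0, 0.01)  # simulated observation
        arm.record(elapsed, score)
        spent += step
    best = max(arms, key=lambda a: a.scores[-1])
    print("selected:", best.name)
```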

Cited by 0 publications
References 7 publications