2022
DOI: 10.48550/arxiv.2203.01717
Preprint
Why Do Machine Learning Practitioners Still Use Manual Tuning? A Qualitative Study

Abstract: Current advanced hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sampling efficiency and facilitate replicability. Nonetheless, machine learning (ML) practitioners (e.g., engineers, scientists) mostly apply less advanced HPO methods, which can increase resource consumption during HPO or lead to underoptimized ML models. Therefore, we suspect that practitioners choose their HPO method to achieve different goals, such as decreasing practitioner effort and target audience compliance. […]
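As a point of reference for the "advanced HPO" the abstract contrasts with manual tuning, below is a minimal, hypothetical sketch using Optuna's default TPE sampler (a Bayesian-optimization-style method). The dataset, model, and search ranges are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: automated HPO with a Bayesian-style sampler (Optuna TPE).
# Dataset, model, and search ranges are illustrative assumptions only.
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def objective(trial):
    # Sample hyperparameters from log-uniform ranges; the TPE sampler uses
    # results of past trials to focus new samples on promising regions.
    c = trial.suggest_float("C", 1e-3, 1e3, log=True)
    gamma = trial.suggest_float("gamma", 1e-4, 1e0, log=True)
    return cross_val_score(SVC(C=c, gamma=gamma), X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print(study.best_params, study.best_value)
```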

Cited by 1 publication (1 citation statement)
References 15 publications (28 reference statements)
“…All the aforementioned techniques are dedicated to special cases; as an example, grid search is only reliable for low-dimensional input spaces [47]. On the other hand, it was shown that random search results in better sampling efficiency in high-dimensional search spaces compared to grid search [49]. Bayesian optimization might potentially trap the model at a local optimum.…”
Section: Results
confidence: 99%
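The contrast the citing authors draw between grid search and random search can be illustrated with a short, hypothetical scikit-learn sketch; the dataset, model, and search spaces below are illustrative assumptions rather than anything used in either paper.

```python
# Hypothetical sketch: grid search vs. random search under a comparable trial budget.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Grid search: the number of trials grows exponentially with the number of
# hyperparameters, which is why it is mainly reliable in low-dimensional spaces.
grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]},
    cv=3,
).fit(X, y)

# Random search: the same budget of 16 trials is spread over continuous ranges,
# which tends to cover the influential dimensions more efficiently.
rand = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": loguniform(1e-1, 1e2), "gamma": loguniform(1e-3, 1e0)},
    n_iter=16,
    cv=3,
    random_state=0,
).fit(X, y)

print("grid best:", grid.best_score_, grid.best_params_)
print("random best:", rand.best_score_, rand.best_params_)
```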