2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8851978

On the Performance of Differential Evolution for Hyperparameter Tuning

Abstract: Automated hyperparameter tuning aspires to facilitate the application of machine learning for non-experts. In the literature, different optimization approaches are applied for that purpose. This paper investigates the performance of Differential Evolution for tuning hyperparameters of supervised learning algorithms for classification tasks. This empirical study involves a range of different machine learning algorithms and datasets with various characteristics to compare the performance of Differential Evolutio…
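As a rough illustration of the tuning setup the abstract describes, the sketch below uses SciPy's differential_evolution to minimize the cross-validated error of a classifier over a small hyperparameter space. The base learner (an SVM), the log-scale search ranges, and the DE settings are assumptions for illustration, not the paper's experimental protocol.

```python
# Minimal sketch: tuning an SVM's C and gamma with Differential Evolution.
# The dataset, base learner, search ranges, and CV setup are illustrative
# assumptions, not the paper's exact experimental configuration.
from scipy.optimize import differential_evolution
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def objective(params):
    # DE minimizes, so return the negative cross-validated accuracy.
    log_C, log_gamma = params
    clf = SVC(C=10 ** log_C, gamma=10 ** log_gamma)
    return -cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()

# Search both hyperparameters on a log10 scale.
bounds = [(-3, 3), (-4, 1)]

result = differential_evolution(objective, bounds, maxiter=20, popsize=10, seed=0)
print("best C = %.4g, gamma = %.4g" % (10 ** result.x[0], 10 ** result.x[1]))
print("cross-validated accuracy = %.4f" % -result.fun)
```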

Citations: Cited by 20 publications (11 citation statements)
References: 9 publications
“…In recent years, the EA has achieved state-of-the-art performances in optimizing deep neural networks [10]–[13], [15]–[20]. It has also proved its superiority over Bayesian optimization [20], [40], [41], which is not applicable to solving a sequential decision problem such as the architecture of neural networks [38]. Moreover, the performance of Bayesian optimization is poor when using a high number of hyperparameters [13].…”
Section: A. Optimization of Hyperparameters and Architecture (mentioning)
confidence: 99%
“…In addition to the internal parameters adjusted in the training step, ML methods are generally sensitive to hyperparameter definitions [26]. We propose a hybrid approach coupling the ML methods with hyperparameters tuning through a Randomized Search (RS) strategy in this work.…”
Section: Randomized Search (RS) Strategy (mentioning)
confidence: 99%
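For context, the randomized-search strategy mentioned in the quote above is commonly implemented along the lines of scikit-learn's RandomizedSearchCV; the estimator and parameter distributions below are illustrative assumptions rather than the cited work's configuration.

```python
# Minimal sketch of a Randomized Search (RS) hyperparameter-tuning step.
# The random forest and the sampling distributions are assumptions.
from scipy.stats import loguniform, randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_distributions = {
    "n_estimators": randint(50, 500),       # integers sampled uniformly
    "max_depth": randint(2, 20),
    "min_samples_split": randint(2, 20),
    "max_features": loguniform(1e-2, 1.0),  # fraction of features, log scale
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=50,          # number of random configurations evaluated
    cv=5,
    scoring="accuracy",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```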
“…This work leverages the traces of experiments in [3], which executed hyperparameter tuning for six base learners by an evolutionary strategy. Running different algorithm selection policies on the recorded experiment traces allows evaluating different bandit policies based on a common ground truth.…”
Section: B. Experimental Validation (mentioning)
confidence: 99%
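The trace-replay idea quoted above can be sketched as follows: recorded per-tuner reward sequences stand in for live runs, and a bandit policy chooses which tuner to "pull" next. The epsilon-greedy policy and the trace format here are assumptions for illustration, not the cited paper's exact procedure.

```python
# Illustrative sketch of evaluating a bandit policy offline on recorded
# hyperparameter-tuning traces; trace format and policy are assumptions.
import random

def epsilon_greedy_replay(traces, epsilon=0.1, seed=0):
    """Replay recorded per-tuner reward sequences under an epsilon-greedy policy.

    traces: dict mapping tuner name -> list of recorded rewards (e.g. validation
            accuracies), indexed by how often that tuner has been pulled so far.
    """
    rng = random.Random(seed)
    arms = list(traces)
    pulls = {a: 0 for a in arms}
    totals = {a: 0.0 for a in arms}
    history = []

    budget = min(len(r) for r in traces.values())
    for _ in range(budget):
        if rng.random() < epsilon or all(pulls[a] == 0 for a in arms):
            arm = rng.choice(arms)  # explore
        else:
            arm = max(arms, key=lambda a: totals[a] / max(pulls[a], 1))  # exploit
        reward = traces[arm][pulls[arm]]  # read the next recorded outcome
        pulls[arm] += 1
        totals[arm] += reward
        history.append((arm, reward))
    return history

# Hypothetical traces for two tuner/base-learner combinations.
traces = {"DE+SVM": [0.90, 0.92, 0.93, 0.93], "RS+RF": [0.88, 0.91, 0.91, 0.92]}
print(epsilon_greedy_replay(traces, epsilon=0.2))
```

Because every policy is scored against the same recorded outcomes, comparisons between bandit strategies share a common ground truth, which is the point made in the quoted passage.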
“…That enables a fair comparison among the bandits: more expensive bandit strategies have to offset their higher computational costs by selecting better actions. 1) Computational Resources and Setup: [3] executed each tuner (and base learner) in a single docker container with only a single CPU core accessible. Parallel execution of different experiments was limited to ensure that a full CPU core was available for each docker container.…”
Section: B. Experimental Validation (mentioning)
confidence: 99%