2019 · DOI: 10.3390/electronics8111267

An Approach to Hyperparameter Optimization for the Objective Function in Machine Learning

Abstract: In machine learning, performance is of great value, yet each learning process requires considerable time and effort to set its parameters. A critical problem in machine learning is determining the hyperparameters, such as the learning rate, mini-batch size, and regularization coefficient. In particular, we focus on the learning rate, which is directly related to learning efficiency and performance. Bayesian optimization using a Gaussian process is common for this purpose. In this paper, based on Bayesian op…
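The abstract outlines the standard pipeline the paper builds on: model a validation objective with a Gaussian process and choose the next learning rate to try via an acquisition function. A minimal Python sketch of that general loop follows; it is an illustration of Gaussian-process Bayesian optimization with Expected Improvement, not the paper's specific method, and the toy objective stands in for a real training-and-validation run.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical stand-in for the real objective: in practice this would
# train a model with learning rate 10**log_lr and return validation loss.
def objective(log_lr):
    return (log_lr + 2.5) ** 2          # toy loss, minimized at lr = 10**-2.5

bounds = (-5.0, 0.0)                     # search log10(lr) in [1e-5, 1]
rng = np.random.default_rng(0)

# A few random evaluations to seed the surrogate model
X = rng.uniform(*bounds, size=(3, 1))
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(10):
    gp.fit(X, y)                                     # posterior over the loss
    cand = np.linspace(*bounds, 200).reshape(-1, 1)  # candidate learning rates
    mu, sigma = gp.predict(cand, return_std=True)
    imp = y.min() - mu                               # improvement over best so far
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)     # Expected Improvement
    x_next = cand[np.argmax(ei)].reshape(1, 1)       # most promising point
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0, 0]))

print(f"best log10(lr): {X[np.argmin(y), 0]:.3f} (loss {y.min():.4f})")
```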

Cited by 17 publications (14 citation statements) · References 14 publications
“…For this purpose, a BO-based hyper-parameter optimization will be applied. It uses prior information to obtain the parameter distribution, and it is widely used for machine learning models [33][34][35].…”
Section: PSO
confidence: 99%
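The statement above describes the core mechanic: a Gaussian-process prior is conditioned on the evaluations gathered so far to obtain a distribution over the objective. A minimal sketch of that conditioning step, with assumed/illustrative observation values, using scikit-learn's GaussianProcessRegressor:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Assumed/illustrative observations: validation loss at three
# hyper-parameter settings that have already been evaluated.
X_obs = np.array([[0.1], [0.4], [0.9]])
y_obs = np.array([0.80, 0.30, 0.55])

# Conditioning the GP prior on these observations yields the
# posterior distribution over the objective that BO relies on.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2))
gp.fit(X_obs, y_obs)

X_grid = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_grid, return_std=True)
for x, m, s in zip(X_grid.ravel(), mean, std):
    print(f"x = {x:.2f}: posterior mean {m:.3f}, std {s:.3f}")
```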
“…In order to balance exploration and exploitation when selecting the next point to examine, an acquisition function is used (Probability of Improvement (PI) [42], Expected Improvement (EI) [43], GP Upper Confidence Bound (GP-UCB) [44]). For a detailed analysis of BO-based hyper-parameter optimization, readers are referred to [33][34][35]. According to the results provided in [34], the EI and GP-UCB functions are faster than PI, and EI is less complicated than GP-UCB, since EI, unlike GP-UCB, has no hyper-parameters that need to be tuned.…”
Section: Hyper-parameter Optimization
confidence: 99%
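The three acquisition functions named in this statement fit in a few lines. The sketch below scores candidates for a minimization problem; the `xi` margin and `kappa` weight are conventional illustrative choices rather than values from the cited papers, and `kappa` is exactly the extra hyper-parameter that makes GP-UCB less convenient than EI in the comparison quoted above.

```python
import numpy as np
from scipy.stats import norm

def acquisitions(mu, sigma, best, xi=0.01, kappa=2.0):
    """PI, EI, and GP-UCB scores for a minimization problem.

    mu, sigma : GP posterior mean and std at the candidate points
    best      : lowest objective value observed so far
    xi        : small exploration margin (illustrative choice)
    kappa     : GP-UCB trade-off weight -- the extra hyper-parameter
                that EI does not need
    """
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu - xi) / sigma
    pi = norm.cdf(z)                                           # Probability of Improvement
    ei = (best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)  # Expected Improvement
    ucb = -(mu - kappa * sigma)                                # GP-UCB (negated lower bound)
    return pi, ei, ucb

# Example: score two candidates against a best-so-far loss of 0.30
print(acquisitions(np.array([0.25, 0.40]), np.array([0.05, 0.20]), best=0.30))
```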
“…The hyperparameter tuning of machine learning models can be regarded as the optimization of a black-box function [29]. Evaluating this function is computationally expensive and, more importantly, its analytic expression is unknown.…”
Section: Hyperparameter Optimization
confidence: 99%
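A short sketch of what treating tuning as a black-box problem means in practice: the optimizer sees only hyper-parameter values going in and a cross-validated error coming out, and each evaluation pays for a full training run. The dataset, model, and hyper-parameter values below are illustrative assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def black_box(n_estimators, max_depth):
    """The tuner sees only hyper-parameters in and an error out; the
    mapping between them has no known closed form, and each call costs
    a full cross-validated training run."""
    model = RandomForestClassifier(n_estimators=int(n_estimators),
                                   max_depth=int(max_depth),
                                   random_state=0)
    return 1.0 - cross_val_score(model, X, y, cv=3).mean()

# Each probe of the black box is one expensive experiment
print(black_box(n_estimators=50, max_depth=8))
```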
“…In the early days, attempts were made to translate human problem-solving logic into computer language. As artificial intelligence draws attention, machine learning and deep learning, its core technologies, have also emerged as important keywords [7][8][9].…”
Section: Artificial Intelligence
confidence: 99%