Proceedings of the 17th International Conference on Predictive Models and Data Analytics in Software Engineering 2021
DOI: 10.1145/3475960.3475986
Comparative study of random search hyper-parameter tuning for software effort estimation

Cited by 12 publications (5 citation statements). References 41 publications.
“…GS is a simple and easy-to-use method [21]. It is the method most often used by researchers for optimization and has been shown to improve accuracy [22][23][24][25][26][27][28][29][30][31][32][33]. GS is a technique that searches the space of all possible combinations of the hyperparameters given by the user.…”
Section: Hyperparameters Tuning Technique
confidence: 99%
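To make the exhaustive enumeration described in the quoted passage concrete, here is a minimal sketch of grid search using scikit-learn's GridSearchCV. The estimator, grid values, and synthetic dataset are illustrative assumptions and are not taken from the cited studies.

```python
# Minimal grid-search sketch (illustrative; estimator and grid values are assumptions).
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, random_state=0)

# Every combination in the grid is evaluated: 3 * 3 * 2 = 18 candidates per CV split.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2],
}

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid=param_grid,
    cv=5,
    scoring="neg_mean_absolute_error",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```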
“…These algorithms were chosen based on their accurate performance in the literature for predicting annual building energy consumption, their effectiveness in handling both categorical and numerical input features [17], and their robustness in modelling non-linear relationships [16,18]. The following section highlights the key features and algorithms employed in the selected ML models.…”
Section: Model Selection
confidence: 99%
“…In some cases, particularly when dealing with large and complex datasets, methods like random search can offer similar benefits to more sophisticated hyperparameter tuners while requiring fewer computational resources and being easier to implement. Empirical experiments have shown that a simple random search algorithm, sampling as few as 60 hyperparameter combinations, can perform as effectively as an exhaustive grid search spanning over 4000 hyperparameter values [18,30]. As a result, considering the complexity of the dataset, the randomized search method has been utilized in this study for the hyperparameter tuning process.…”
Section: Hyperparameter Tuning
confidence: 99%
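As a rough sketch of the random-search approach the passage refers to, the snippet below samples 60 hyperparameter combinations with scikit-learn's RandomizedSearchCV, mirroring the sample size mentioned in the quote. The estimator, distributions, and dataset are assumptions for illustration only.

```python
# Minimal random-search sketch (illustrative; estimator and distributions are assumptions).
from scipy.stats import randint
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=200, n_features=10, random_state=0)

# Instead of enumerating every grid point, draw 60 random combinations.
param_distributions = {
    "n_estimators": randint(10, 500),
    "max_depth": randint(2, 20),
    "min_samples_leaf": randint(1, 10),
}

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions=param_distributions,
    n_iter=60,
    cv=5,
    scoring="neg_mean_absolute_error",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```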
“…Since they cannot be learnt from the data, they are usually denoted as hyperparameters and provided explicitly. There are various strategies for choosing the best possible values of these hyperparameters, like grid search, random selection, simulated annealing, and other methods [47][48][49].…”
Section: Hyperparameter Tuning Grids
confidence: 99%