2018
DOI: 10.1007/s13369-018-3564-9
Comparing Hyperparameter Optimization in Cross- and Within-Project Defect Prediction: A Case Study

Cited by 5 publications (4 citation statements)
References 34 publications
“…In this section, we compare our results with previous studies in terms of the optimization methods, classifiers, and hyperparameters tuned, and the results obtained, so as to determine whether our study improves on the existing ones. Papers [2] and [12] tuned the parameters of 30 classification algorithms; papers [3] and [34] tuned the parameters of only 2 classification algorithms; and paper [13] optimized SVM-RBF parameters using AIS (clonal selection), so its results are comparable with ours given the classifiers and optimization methods used. The results gratifyingly showed that our technique was indeed an improvement on the existing ones.…”
Section: Comparison With Previous Techniques (supporting)
confidence: 68%
“…They reported that for the German dataset all three objectives were improved, indicating that if the fairness and performance of multi-objective optimization are understood, it is possible to achieve one without degrading the other. Öztürk et al. [34] investigated the effect of hyperparameter optimization in cross-project defect prediction (CPDP) and within-project defect prediction (WPDP). For this they used RF and SVM tuned via grid search, a Gradient Boosting machine for boosting, and 20 datasets drawn from Softlab, NASA MDP, and open-source projects.…”
Section: Related Work (mentioning)
confidence: 99%
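The grid search mentioned in the statement above exhaustively scores every combination of candidate hyperparameter values and keeps the best. A minimal sketch of that procedure, in which `evaluate` is a hypothetical stand-in for cross-validated scoring of a classifier such as SVM or RF (not the cited study's actual code):

```python
from itertools import product

def grid_search(grid, evaluate):
    """Return the best-scoring hyperparameter combination from a grid.

    grid: dict mapping parameter name -> list of candidate values.
    evaluate: callable taking a params dict and returning a score
              (higher is better), e.g. mean cross-validation accuracy.
    """
    names = list(grid)
    best_params, best_score = None, float("-inf")
    # Enumerate the Cartesian product of all candidate values.
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective that peaks at C=1.0, gamma=0.1 (illustrative only).
def evaluate(p):
    return -abs(p["C"] - 1.0) - abs(p["gamma"] - 0.1)

best, score = grid_search(
    {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}, evaluate
)
print(best)  # {'C': 1.0, 'gamma': 0.1}
```

The cost grows multiplicatively with each added parameter, which is why studies limit grids to a few values per hyperparameter.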
“…One such method is hyperparameter optimisation for an enhanced convolutional neural network (CNN) model for WPDP [4]. Another study investigated the effects of optimising WPDP hyperparameters [5]. One effective approach for reliably predicting software system bugs is the use of logistic regression and ensemble-bagged tree-based prediction models [6].…”
Section: Introduction (mentioning)
confidence: 99%
“…Normally, classifiers can be trained on previously collected datasets. Existing ML techniques have facilitated many SDP approaches [30][31][32][33]. The classifier with the highest performance index is selected by evaluating performance in terms of balance, a measure that reflects class imbalance.…”
(mentioning)
confidence: 99%
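The "balance" measure referred to above is, in much of the defect-prediction literature, the normalized Euclidean distance from the ideal point of perfect recall and zero false alarms. A sketch under that assumption (the statement itself does not give the formula):

```python
import math

def balance(pd, pf):
    """Balance metric commonly used in software defect prediction.

    pd: probability of detection (recall on the defective class).
    pf: probability of false alarm (false-positive rate).
    Returns 1 minus the normalized distance from the ideal
    point (pd=1, pf=0); higher is better, 1.0 is perfect.
    """
    return 1.0 - math.sqrt(((0.0 - pf) ** 2 + (1.0 - pd) ** 2) / 2.0)

print(round(balance(1.0, 0.0), 3))  # 1.0 — perfect detection, no false alarms
```

Because it penalizes both missed defects and false alarms, balance is less misleading than raw accuracy on the heavily imbalanced datasets typical of SDP.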