2015
DOI: 10.5121/ijsea.2015.6302

Benchmarking Machine Learning Techniques for Software Defect Detection

Abstract: Machine learning approaches are well suited to problems for which only limited information is available. Software-domain problems can often be characterized as a learning process that depends on, and adapts to, varying circumstances. A predictive model is constructed using machine learning approaches to classify software modules as defective or non-defective. Machine learning techniques help developers retrieve useful information from this classification and enable them to analyse data from different pe…

Cited by 45 publications (34 citation statements)
References 29 publications
“…The researchers achieve a fair and rigorous comparison of learners by using significance tests [39], because of their ability to distinguish significant observations from chance observations. According to [39], the Wilcoxon signed-rank test is the recommended non-parametric test for comparing two learners over multiple datasets.…”
Section: Results
confidence: 99%
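The comparison described above can be sketched with SciPy's paired Wilcoxon signed-rank test. This is a minimal illustration, assuming hypothetical AUC scores for two learners measured on the same ten datasets (the numbers are invented, not taken from the paper):

```python
from scipy.stats import wilcoxon

# Hypothetical per-dataset AUC scores for two learners (illustrative only).
learner_a = [0.82, 0.75, 0.91, 0.68, 0.77, 0.85, 0.79, 0.88, 0.73, 0.81]
learner_b = [0.78, 0.74, 0.86, 0.70, 0.72, 0.80, 0.77, 0.84, 0.69, 0.79]

# Paired, non-parametric test: it ranks the absolute score differences and
# makes no normality assumption, which suits cross-dataset comparisons.
stat, p_value = wilcoxon(learner_a, learner_b)
print(f"W = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference between the two learners is significant at alpha = 0.05")
```

Because the test is paired, both score lists must come from the same datasets in the same order; shuffling one list independently would invalidate the result.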
“…According to [39], the Wilcoxon signed-rank test is the recommended non-parametric test for comparing two learners over multiple datasets. For comparing multiple learners over multiple datasets, the Friedman test followed by the post-hoc Nemenyi test is recommended instead.…”
Section: Results
confidence: 99%
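The multiple-learner case can be sketched with SciPy's Friedman test. This is an illustrative example with hypothetical accuracies for three learners on the same eight datasets (invented numbers, not from the paper); the post-hoc Nemenyi step is not in SciPy, but the third-party `scikit-posthocs` package provides it:

```python
from scipy.stats import friedmanchisquare

# Hypothetical per-dataset accuracies for three learners (illustrative only).
nb  = [0.76, 0.81, 0.69, 0.74, 0.80, 0.72, 0.78, 0.75]
rf  = [0.82, 0.85, 0.74, 0.79, 0.86, 0.77, 0.84, 0.80]
svm = [0.79, 0.83, 0.71, 0.77, 0.82, 0.74, 0.81, 0.78]

# The Friedman test ranks the learners within each dataset and tests
# whether their average ranks differ significantly.
stat, p_value = friedmanchisquare(nb, rf, svm)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, a post-hoc Nemenyi test (e.g. scikit-posthocs'
# posthoc_nemenyi_friedman) identifies which pairs of learners differ.
```

The Friedman test only says that some learner differs; the Nemenyi post-hoc step is what localizes the difference to specific pairs.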
“…Aleem et al [23] "used 15 NASA datasets from the PROMISE repository to compare the performance of 11 machine learning methods". The study included NB, multi-layer perceptron (MLP), support vector machines (SVMs), AdaBoost, bagging, DS, RF, J48, KNN, RBF, and k-means.…”
Section: Related Research
confidence: 99%
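A comparison of this kind can be sketched with scikit-learn. The following is a minimal illustration, not the paper's actual protocol: it uses a synthetic class-imbalanced dataset as a stand-in for the NASA/PROMISE defect data, covers a subset of the 11 learners (scikit-learn has no direct J48, RBF-network, or k-means classifier; a plain decision tree and a depth-1 stump stand in for J48 and DS), and scores each with 5-fold cross-validation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a defect dataset: 20 metrics, ~20% defective modules.
X, y = make_classification(n_samples=500, n_features=20,
                           weights=[0.8, 0.2], random_state=42)

classifiers = {
    "NB": GaussianNB(),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "Bagging": BaggingClassifier(),
    "DS": DecisionTreeClassifier(max_depth=1),  # decision stump
    "RF": RandomForestClassifier(),
    "J48-like": DecisionTreeClassifier(),       # C4.5-style stand-in
    "KNN": KNeighborsClassifier(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name:10s} mean accuracy = {scores.mean():.3f}")
```

On imbalanced defect data, accuracy alone can be misleading; the `scoring` parameter accepts alternatives such as `"f1"` or `"roc_auc"` when the minority (defective) class is the one of interest.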
“…Aleem et al [27] compared the performance of 11 machine learning methods and used 15 NASA datasets from the PROMISE repository. NB, MLP, SVM, AdaBoost, bagging, DS, RF, J48, KNN, RBF, and k-means were applied in their study.…”
Section: Related Work
confidence: 99%
“…In addition, Bowes [22] suggested the use of classifier ensembles to effectively predict defects. A number of works in the field of SDP have utilized ensemble methods such as bagging [23] [24] [25], voting [22] [26], boosting [23] [24] [25], random tree [22], RF [27] [28], and stacking [22]. Neural networks (NN) can also be used to predict defect-prone software modules [29] [30] [31] [32].…”
Section: Introduction
confidence: 99%