2020
DOI: 10.1016/j.jestch.2019.10.005
Multiple-classifiers in software quality engineering: Combining predictors to improve software fault prediction ability

Cited by 38 publications (28 citation statements)
References 29 publications
“…The results obtained after classification confirmed the improvement in performance from combining sampling techniques with an ensemble classifier. In [68], ten ensemble classifiers were compared to baseline classifiers. The evaluated ensemble learning algorithms were AdaBoostM1, LogitBoost, MultiBoostAB, Bagging, Random Forest (RF), Dagging, Rotation Forest (ROF), Stacking, Multi-Scheme, and Voting.…”
Section: RQ1: Which Ensemble Learning Techniques Are Applied for Software Defect Prediction?
confidence: 99%
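The Voting ensemble named in the excerpt above combines the predictions of several base classifiers by majority rule. As a minimal sketch (pure Python, with hypothetical hard-coded base-classifier outputs standing in for trained models, not the paper's implementation):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions by hard majority voting.

    predictions: list of lists; predictions[i][j] is classifier i's
    label (0 = clean, 1 = faulty) for software module j.
    """
    n_modules = len(predictions[0])
    combined = []
    for j in range(n_modules):
        votes = Counter(clf_preds[j] for clf_preds in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three hypothetical base classifiers voting on four modules
preds = [
    [1, 0, 1, 0],  # e.g. a decision tree's predictions
    [1, 1, 0, 0],  # e.g. naive Bayes
    [1, 0, 0, 1],  # e.g. an SVM
]
print(majority_vote(preds))  # -> [1, 0, 0, 0]
```

A module is flagged as faulty only when most base classifiers agree, which is the intuition behind the performance gains the cited studies report.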
“…[5] performed an analysis of FS and three ensemble learning methods; [6] combined FS and Data Balancing (DB) with ensemble techniques; [7] proposed SmoteNDBoost and RusNDBoost; [63] compared the bagging, boosting, and stacking ensembles using 11 base classifiers; [64] combined RF with feature selection and data sampling; [65] analyzed whether different classifiers identify the same defects using RF, NB, RPart, and SVM. 2018: [8] analyzed ensembles of weighted randomized majority voting techniques; [9] proposed the SDAEsTSE model; [12] analyzed model performance with and without applying SMOTE, AdaBoost, and Bagging; [41] proposed the PBIL-Auto-Ens technique; [42] proposed ensemble ROS, MWM, and FIDoS methods with RF as the base classifier; [47] proposed multi-objective optimization for ensemble classification; [52] proposed a framework based on PCA for FS with RF, AdaBoost, Bagging, and classification-via-regression ensembles; [55] proposed an enhancement of the SMOTE-Ensemble approach using cost-sensitive learning (CSL); [66] proposed a deep super learner (DSL). 2019: [67] compared AdaBoost, Bagging, RSM, RF, and Vote ensembles; [68] compared ten ensemble classifiers to baseline classifiers. 2020…”
Section: RQ1: Which Ensemble Learning Techniques Are Applied for Software Defect Prediction?
confidence: 99%
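Several of the entries above build on bagging (bootstrap aggregating): each base model is trained on a bootstrap sample of the data and the models vote on new modules. A minimal sketch, where `train` is a hypothetical callable standing in for any base learner (here a trivial majority-label classifier), not any cited study's code:

```python
import random

def bootstrap_sample(data, rng):
    """Draw a bootstrap sample: same size as data, drawn with replacement."""
    return [rng.choice(data) for _ in range(len(data))]

def bagging_predict(data, train, n_estimators, x, seed=0):
    """Train n_estimators models on bootstrap samples and majority-vote on x."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        model = train(bootstrap_sample(data, rng))
        votes.append(model(x))
    return max(set(votes), key=votes.count)

# Toy base learner: ignores features and predicts the sample's majority label
def train(sample):
    labels = [y for _, y in sample]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

data = [(0, 0), (1, 0), (2, 1)]  # (feature, label) pairs
print(bagging_predict(data, train, n_estimators=5, x=1))
```

The resampling decorrelates the base models, which is why bagging-style ensembles (RF included) are so common in the defect-prediction studies listed above.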
“…The final results showed that DP-CNN improves on the state-of-the-art method by 12%. Yucalar et al. [23], in their study, aimed to empirically demonstrate the fault-prediction performance of 10 ensemble predictors against a baseline predictor. The experiment was conducted on 15 open-source project datasets from the PROMISE repository.…”
Section: Related Work
confidence: 99%