2020
DOI: 10.1007/978-3-030-51965-0_43

Search-Based Wrapper Feature Selection Methods in Software Defect Prediction: An Empirical Analysis

Citations: Cited by 17 publications (11 citation statements)
References: 29 publications
“…Alqushaibi et al [41] propose a new weight optimization method based on the sine cosine algorithm (SCA). Balogun et al [42] evaluate a number of different methods on a real-world dataset of software defects and show that they can significantly improve the performance of defect prediction models.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
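The sine cosine algorithm (SCA) referenced in [41] updates candidate solutions by oscillating around the best position found so far. Below is a minimal NumPy sketch of the standard SCA position update for a generic minimization problem; the function name, parameter values, and fitness function are illustrative assumptions, not the weight-optimization formulation used by Alqushaibi et al.

```python
import numpy as np

def sca_optimize(fitness, dim, n_agents=20, n_iter=100, lb=-1.0, ub=1.0, a=2.0):
    """Minimal sine cosine algorithm (SCA) sketch for minimizing `fitness`."""
    X = np.random.uniform(lb, ub, size=(n_agents, dim))   # candidate solutions
    scores = np.apply_along_axis(fitness, 1, X)
    best = X[scores.argmin()].copy()                       # best (destination) position so far
    best_score = scores.min()

    for t in range(n_iter):
        r1 = a - t * (a / n_iter)                          # linearly decreasing step amplitude
        for i in range(n_agents):
            r2 = np.random.uniform(0, 2 * np.pi, dim)
            r3 = np.random.uniform(0, 2, dim)
            r4 = np.random.rand(dim)
            step = np.where(r4 < 0.5,                      # sine branch vs. cosine branch
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(X[i] + step, lb, ub)
            s = fitness(X[i])
            if s < best_score:                             # keep the best position found
                best, best_score = X[i].copy(), s
    return best, best_score

# Illustrative usage on a toy quadratic objective:
best_w, err = sca_optimize(lambda w: float(np.sum(w ** 2)), dim=5)
```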
“…The 10-fold CV option is based on its ability to create malware detection models with a low impact from the issue of class imbalance [20, 50–53]. Moreover, the K-fold CV approach ensures that each instance can be used iteratively for both training and testing [54–56]. The Waikato Environment for Knowledge Analysis (WEKA) machine learning library [57] and the R programming language [58] are used for the experimentation on an Intel(R) Core™ i7-6700 machine with a 3.4 GHz CPU and 16 GB RAM.…”
Section: 4
Citation type: mentioning (confidence: 99%)
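The cited study ran its experiments in WEKA and R; purely as an illustration of the quoted 10-fold CV protocol, where every instance falls in exactly one test fold and nine training folds, here is a short Python sketch with scikit-learn. The synthetic dataset and choice of a Naive Bayes classifier are assumptions made for the example, not details from the cited work.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in, imbalanced dataset: replace with real software-metric features and defect labels.
X, y = make_classification(n_samples=500, n_features=20, weights=[0.8, 0.2],
                           random_state=42)

# Stratified 10-fold CV: each instance is used once for testing and nine times for training,
# and class proportions are preserved in every fold.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(GaussianNB(), X, y, cv=cv, scoring="roc_auc")
print(f"Mean AUC over 10 folds: {scores.mean():.3f} +/- {scores.std():.3f}")
```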
“…In the work proposed by Balogun et al. (2020), 13 search-based WFS methods were applied to 7 datasets for SDP with the NB classification algorithm. Accuracy and AUC were the two evaluation metrics used to evaluate the performance of the SDP models.…”
Section: Feature Selection
Citation type: mentioning (confidence: 99%)
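Balogun et al. (2020) compared metaheuristic search strategies inside the wrapper; as a simplified stand-in for that setup, the sketch below wraps a Naive Bayes classifier in a greedy sequential forward search and reports accuracy and AUC, mirroring the two metrics quoted above. The dataset, number of selected features, and fold counts are illustrative assumptions, and greedy forward selection is used here only as a simple example of a wrapper search, not as one of the 13 methods from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Stand-in defect dataset; real SDP data would typically come from public defect repositories.
X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)

nb = GaussianNB()
# Wrapper feature selection: the search is scored by the classifier itself.
selector = SequentialFeatureSelector(nb, n_features_to_select=10,
                                     direction="forward", cv=5)
model = make_pipeline(selector, nb)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
results = cross_validate(model, X, y, cv=cv, scoring=["accuracy", "roc_auc"])
print(f"Accuracy: {results['test_accuracy'].mean():.3f}")
print(f"AUC:      {results['test_roc_auc'].mean():.3f}")
```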