2020
DOI: 10.1109/tr.2019.2895462
A Novel Class-Imbalance Learning Approach for Both Within-Project and Cross-Project Defect Prediction

Cited by 58 publications (22 citation statements)
References 38 publications
“…In other words, the class overlap issue has a performance impact on prediction; it also shows that when removing the class overlap instance, strategies such as oversampling should be considered to solve the class imbalance problem. This strategy can be used in other application domains, such as software defect predictions, to solve the class imbalance problem [33]. Also, for cross-project software defect prediction, the class imbalance problem can be solved using this SNCR strategy [34] [35].…”
Section: Discussion
confidence: 99%
“…The sampling method adds or removes data instances to achieve a balanced training set. It might result in the loss of much important information, which can lead to unsatisfactory results [79, 80]. After analysing our empirical evidence, we found SMOTE produces better results than these sampling approaches.…”
Section: Empirical Study and The Proposed Approach
confidence: 93%
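The statement above compares sampling strategies and singles out SMOTE, which balances a training set by synthesizing new minority-class points rather than duplicating or discarding instances. As a minimal sketch of the SMOTE idea (not the cited papers' implementations), each synthetic point is an interpolation between a minority sample and one of its k nearest minority neighbours; the feature vectors below are hypothetical stand-ins for defective-module metrics:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate synthetic minority samples by interpolating between
    a random minority point and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class (excluding x itself)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Hypothetical minority-class feature vectors (e.g. metrics of defective modules)
minority = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.2), (1.1, 2.1)]
new_points = smote(minority, n_new=4)
```

Because every synthetic point lies on a segment between two existing minority points, oversampling stays inside the region the minority class already occupies, which is why it loses less information than deleting majority instances.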
“…Besides homogeneous defect prediction, heterogeneous defect prediction has recently made great progress. Gong et al [47] utilized the idea of stratification embedded in the nearest neighbor to produce evolving training datasets with balanced data. Zou et al [48] proposed a method named Joint Feature representation with double marginalized denoising auto-encoders to learn the global and local features, and they introduced local data gravitation between source and target domains to determine instance weight in the learning process (Zou et al [49]).…”
Section: Related Work
confidence: 99%