2019
DOI: 10.1002/smr.2234

Do different cross‐project defect prediction methods identify the same defective modules?

Abstract: Cross‐project defect prediction (CPDP) is needed when the target projects are new or have little training data, since such projects do not have sufficient historical data to build high‐quality prediction models. Researchers have proposed many CPDP methods, and previous studies have conducted extensive comparisons of the performance of different CPDP methods. However, to the best of our knowledge, it remains unclear whether different CPDP methods can identify the same defective modules,…

Citations: Cited by 13 publications (9 citation statements)
References: 95 publications (157 reference statements)
“…We further use the Scott-Knott test [23] to rank our proposed method ALTRA and all seven baselines. The Scott-Knott test was recommended by Ghotra et al. [16] and has since been widely used in empirical studies of software defect prediction [8]–[11], [50]. This method performs the grouping process recursively.…”
Section: Results Analysis, A. Results Analysis for RQ1, 1) RQ1: Can Ou…
confidence: 99%
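As context for the recursive grouping described above, the following is a minimal Python sketch of a Scott-Knott-style ranking over per-project performance scores. It is illustrative only: the method names and scores it expects are hypothetical, and the significance check uses a one-way ANOVA between the two candidate groups as a simplified stand-in for the chi-square likelihood-ratio criterion of the original Scott-Knott test.

```python
# Minimal Scott-Knott-style ranking sketch (hypothetical helper, not the
# implementation used in the cited studies). `scores` maps each method name
# to a list of performance values (e.g., AUC per target project).
import numpy as np
from scipy.stats import f_oneway

def scott_knott_rank(scores, alpha=0.05):
    """Return rank groups (lists of method names), best mean performance first."""
    methods = sorted(scores, key=lambda m: np.mean(scores[m]), reverse=True)

    def split(ms):
        if len(ms) < 2:
            return [ms]
        means = np.array([np.mean(scores[m]) for m in ms])
        # Find the cut that maximises the between-group sum of squares.
        best_cut, best_bss = None, -1.0
        for cut in range(1, len(ms)):
            left, right = means[:cut], means[cut:]
            bss = (len(left) * (left.mean() - means.mean()) ** 2
                   + len(right) * (right.mean() - means.mean()) ** 2)
            if bss > best_bss:
                best_cut, best_bss = cut, bss
        left_scores = np.concatenate([scores[m] for m in ms[:best_cut]])
        right_scores = np.concatenate([scores[m] for m in ms[best_cut:]])
        # Simplified significance check: the original test uses a chi-square
        # approximation of the likelihood ratio; ANOVA is a stand-in here.
        _, p = f_oneway(left_scores, right_scores)
        if p >= alpha:
            return [ms]  # no significant difference: keep one group
        return split(ms[:best_cut]) + split(ms[best_cut:])  # recurse on both sides

    return split(methods)
```

For example, scott_knott_rank({"ALTRA": aucs_altra, "TCA+": aucs_tca}) would return rank groups ordered from best to worst mean score, with each group containing methods whose scores are statistically indistinguishable (all names here are illustrative).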
“…To better rank our proposed method ALTRA and all the CPDP baselines in terms of a specific performance indicator, we use the Scott-Knott test [23], since recent studies [8]–[11], [50] have suggested that the Scott-Knott test is superior to some post hoc tests (e.g., the Friedman-Nemenyi test).…”
Section: Threats to Conclusion Validity
confidence: 99%
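For contrast with the Scott-Knott ranking above, the Friedman-Nemenyi route mentioned in this excerpt starts from an omnibus Friedman test and then requires pairwise post hoc comparisons. The sketch below shows only the omnibus step with SciPy; the method names and scores are hypothetical placeholders, not data from the cited study.

```python
# Hedged sketch of the omnibus step of the Friedman + post hoc route.
from scipy.stats import friedmanchisquare

altra      = [0.71, 0.68, 0.74, 0.70, 0.69]   # e.g., AUC per target project (placeholder values)
baseline_a = [0.65, 0.66, 0.70, 0.64, 0.63]
baseline_b = [0.60, 0.62, 0.66, 0.61, 0.59]

stat, p_value = friedmanchisquare(altra, baseline_a, baseline_b)
print(f"Friedman chi-square={stat:.3f}, p={p_value:.4f}")
# If p is small, pairwise post hoc comparisons (e.g., Nemenyi) follow; unlike
# Scott-Knott's recursive clustering, they can place a method in several
# overlapping groups, which is one reason the excerpt prefers Scott-Knott.
```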
“…Is it useful only in the starting phase of a new project, or can it also work when the project is old? Chen et al. [13] carried out a comparative study of eight supervised and four unsupervised defect prediction models. The main idea was to check whether different models predict the same defective modules.…”
Section: Previous Work, 2.1 Just-in-Time Software Defect Prediction
confidence: 99%
“…To show whether there exists a statistically significant difference between our proposed IETCR strategy and the baselines in terms of the EXAM score metric, we use the Wilcoxon signed-rank test, since this kind of statistical test has been widely used in previous studies [21], [49]–[52].…”
Section: Threats to Validity
confidence: 99%
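As an illustration of the paired comparison described in this excerpt, the sketch below applies SciPy's Wilcoxon signed-rank test to two hypothetical lists of EXAM scores (one value per subject program); the names and values are placeholders, not results from the cited work.

```python
# Hedged sketch: paired Wilcoxon signed-rank test comparing EXAM scores of a
# proposed strategy against one baseline (placeholder data, lower is better).
from scipy.stats import wilcoxon

ietcr    = [0.12, 0.08, 0.15, 0.10, 0.09, 0.11]   # EXAM score per subject program
baseline = [0.18, 0.11, 0.16, 0.14, 0.15, 0.13]

stat, p_value = wilcoxon(ietcr, baseline)
print(f"Wilcoxon statistic={stat:.3f}, p={p_value:.4f}")
# A small p-value (e.g., < 0.05) indicates a statistically significant
# difference between the paired EXAM score distributions.
```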