2017 IEEE International Conference on Software Maintenance and Evolution (ICSME) 2017
DOI: 10.1109/icsme.2017.51
Supervised vs Unsupervised Models: A Holistic Look at Effort-Aware Just-in-Time Defect Prediction

Cited by 106 publications (69 citation statements)
References 53 publications
“…They say a "good" defect predictor selects the 20% of files containing 80% of the defects. In the literature, this 20/80 rule is often called Popt20 (the percent of the bugs found after reading 20%). Popt20 is widely used in the literature and, for details on that measure, we refer the reader to those publications [18], [42], [48], [62], [64], [69], [111]. For this paper, all we need to say about Popt20 is that the conclusions reached from this metric are nearly the same as the conclusions reached via G-score.…”
Section: Evaluation Criteriamentioning
confidence: 98%
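The Popt20 measure described above can be sketched in a few lines: rank modules by predicted risk, inspect them until 20% of the total lines of code have been read, and report the fraction of all known defects found in that budget. This is a minimal illustration, not the authors' implementation; the function name and the assumption that inputs arrive already sorted by predicted risk are ours.

```python
def popt20(loc, bugs):
    """Fraction of all bugs found after inspecting the highest-risk
    files that together account for the first 20% of total LOC.

    loc, bugs: parallel lists, already sorted by predicted risk
    (riskiest file first). This ordering assumption is ours.
    """
    total_loc = sum(loc)
    total_bugs = sum(bugs)
    budget = 0.2 * total_loc  # the "reading 20%" effort budget
    seen_loc = 0
    found = 0
    for file_loc, file_bugs in zip(loc, bugs):
        if seen_loc >= budget:
            break  # 20% LOC budget exhausted
        seen_loc += file_loc
        found += file_bugs
    return found / total_bugs


# Ten equal-sized files; the predictor front-loads the buggy ones,
# so reading 20% of the code finds 16 of 20 bugs.
score = popt20([100] * 10, [8, 8, 1, 1, 1, 1, 0, 0, 0, 0])
```

Under the 20/80 rule quoted above, a "good" predictor is one where this score approaches 0.8 or better at the 20% budget.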
“…This value is the harmonic mean of the recall and the complement of the false-alarm rate of risky software commit prediction. There are other evaluation scores that could be applied to this kind of analysis [42] and, in the future, it would be useful to test whether the central claim of this paper holds for more than just G-scores and Popt20.…”
Section: Evaluation Biasmentioning
confidence: 99%
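The G-score mentioned in this statement is commonly computed as the harmonic mean of recall and (1 − false alarm rate); a hedged sketch of that common formulation (the function name is ours):

```python
def g_score(recall, false_alarm):
    """Harmonic mean of recall and the complement of the false-alarm
    rate. Both inputs are rates in [0, 1]; higher G is better."""
    a = recall
    b = 1.0 - false_alarm
    return 2.0 * a * b / (a + b) if (a + b) > 0 else 0.0
```

A perfect predictor (recall = 1, false alarm = 0) scores 1.0; because the harmonic mean punishes imbalance, high recall cannot compensate for a high false-alarm rate.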
“…As this paper was going to press, we learned of new papers that updated the results of Yang et al. [17]. We thank these authors for the courtesy of sharing a pre-print of those new results.…”
Section: Addendummentioning
confidence: 99%
“…Then, they attempted to combine complexity metrics with additional metrics such as code churn metrics and token frequency metrics [26,31,43,47,48,52,54,57,58,65,79,81]. Later, advances were made in using unsupervised machine learning to predict bugs [25,32,36,46,75,76,77,78,80] with a similar set of complexity metrics. These approaches use metrics similar to those in bug prediction, but do not capture the difference between vulnerable code and buggy code, which limits their effectiveness.…”
Section: Related Workmentioning
confidence: 99%