2017 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)
DOI: 10.1109/esem.2017.8

Code Churn: A Neglected Metric in Effort-Aware Just-in-Time Defect Prediction

Abstract: Online Just-In-Time Software Defect Prediction (O-JIT-SDP) uses an online model to predict whether a new software change will introduce a bug. However, existing studies neglect the interaction of Software Quality Assurance (SQA) staff with the model, and may therefore miss the opportunity to improve prediction accuracy through feedback from SQA staff. To tackle this problem, we propose Human-In-The-Loop (HITL) O-JIT-SDP, which integrates feedback from SQA staff into the prediction process. Furthermore, we …
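The abstract describes an online model that scores each incoming change and is updated from SQA inspection outcomes. The sketch below illustrates that general loop only; the choice of scikit-learn's SGDClassifier as the online learner, the feature set, and the immediate-feedback protocol are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch of an online JIT defect prediction loop with SQA feedback.
# Assumed for illustration: SGDClassifier as the online learner, churn-style
# features, and feedback arriving right after each prediction.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # online logistic regression
classes = np.array([0, 1])               # 0 = clean, 1 = defect-inducing
model_is_fitted = False

def predict_and_update(change_features, sqa_label):
    """Predict a change's risk, then fold the SQA verdict back into the model."""
    global model_is_fitted
    x = np.asarray(change_features, dtype=float).reshape(1, -1)
    if model_is_fitted:
        risk = model.predict_proba(x)[0, 1]   # predicted defect probability
    else:
        risk = 0.5                            # no model yet: neutral prior
    # Human-in-the-loop step: the inspection outcome becomes a training label.
    model.partial_fit(x, [sqa_label], classes=classes)
    model_is_fitted = True
    return risk

# Example: hypothetical features [lines added, lines deleted, files touched]
print(predict_and_update([120, 30, 4], sqa_label=1))
print(predict_and_update([5, 2, 1], sqa_label=0))
```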

Cited by 51 publications (50 citation statements, published 2019–2024); references 47 publications (34 reference statements).
“…Among these industrial studies, only Kamei et al. [17] investigated the effectiveness of an effort-aware JIT defect identification approach (i.e., EALR) in an industrial setting. The effectiveness of the follow-up effort-aware JIT defect identification approaches (e.g., CBS+ [13], OneWay [7], and unsupervised approaches [23, 55]) in an industrial setting has never been investigated. Furthermore, the effectiveness of supervised vs. unsupervised approaches in an industrial setting has never been explored.…”
Section: Introduction (mentioning; confidence: 99%)
“…This paper is the first study to investigate the effectiveness of recently proposed effort-aware JIT defect identification approaches in an industrial setting. • We investigated the effectiveness of state-of-the-art supervised (i.e., CBS+ [13], OneWay [7], and EALR [17]) vs. unsupervised (i.e., LT [55] and Code Churn [23]) effort-aware JIT defect identification approaches on Alibaba projects. • We investigated the important change-level features for effort-aware JIT defect identification on Alibaba projects and their differences compared with open source projects.…”
Section: Introduction (mentioning; confidence: 99%)
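Studies comparing these supervised and unsupervised approaches typically score them with effort-aware measures such as ACC, the recall of defect-inducing changes found within the top 20% of total inspection effort. The sketch below shows one common way to compute that measure; the tuple layout, the use of churn as the effort proxy, and the cutoff handling are assumptions for illustration, not the exact setup of the cited industrial study.

```python
# Minimal sketch of ACC@20% effort for a ranked list of changes.
def acc_at_20(changes):
    """changes: list of (risk_score, churn, is_defect_inducing) tuples."""
    total_effort = sum(churn for _, churn, _ in changes)
    total_defects = sum(1 for _, _, defective in changes if defective)
    spent, found = 0, 0
    # Inspect changes in descending order of predicted risk until 20% of
    # the total effort (approximated here by churn) would be exceeded.
    for _, churn, defective in sorted(changes, key=lambda c: -c[0]):
        if spent + churn > 0.2 * total_effort:
            break
        spent += churn
        found += defective
    return found / total_defects if total_defects else 0.0

example = [(0.9, 10, True), (0.8, 200, False), (0.4, 5, True), (0.1, 50, False)]
print(acc_at_20(example))  # 0.5: one of the two defective changes is caught
```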
“…Shin et al. [29] also showed that NML had better defect tendency prediction performance. Liu et al. [31] proposed an NML-based unsupervised defect prediction model (CCUM) for effort-aware JIT defect prediction, and evaluated the prediction performance of CCUM under cross-validation, time-wise cross-validation, and cross-project validation. The experimental results showed that CCUM performed better than all the prior supervised and unsupervised models.…”
Section: A. Process Metrics (mentioning; confidence: 99%)
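The churn-based unsupervised model discussed above needs no labels or training: it simply ranks changes so that low-churn changes are inspected first, on the assumption that they yield more defects per unit of inspection effort. The sketch below captures that ranking idea in the spirit of CCUM; the field names and the exact scoring function are illustrative assumptions rather than the published definition.

```python
# Minimal sketch of a churn-based unsupervised ranking (CCUM-style).
def rank_changes_by_churn(changes):
    """changes: list of dicts with 'id', 'lines_added', 'lines_deleted'."""
    def score(change):
        churn = change["lines_added"] + change["lines_deleted"]
        # Smaller churn -> higher priority; zero-churn changes go first.
        return 1.0 / churn if churn > 0 else float("inf")
    # No model fitting: ranking alone defines the inspection order.
    return sorted(changes, key=score, reverse=True)

commits = [
    {"id": "a1", "lines_added": 300, "lines_deleted": 40},
    {"id": "b2", "lines_added": 8, "lines_deleted": 2},
    {"id": "c3", "lines_added": 0, "lines_deleted": 1},
]
print([c["id"] for c in rank_changes_by_churn(commits)])  # ['c3', 'b2', 'a1']
```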
“…Then, they attempted to combine complexity metrics with more metrics such as code churn metrics and token frequency metrics [26, 31, 43, 47, 48, 52, 54, 57, 58, 65, 79, 81]. Advances have since been made in using unsupervised machine learning to predict bugs [25, 32, 36, 46, 75, 76, 77, 78, 80] with a similar set of complexity metrics. These approaches use similar metrics to those in bug prediction, but do not capture the difference between vulnerable code and buggy code, which hinders their effectiveness.…”
Section: Related Work (mentioning; confidence: 99%)