2018
DOI: 10.1007/s10664-018-9661-2
Revisiting supervised and unsupervised models for effort-aware just-in-time defect prediction

Cited by 99 publications (97 citation statements) · References 59 publications
“…However, their study supports the general goal of Yang et al. [8]. Huang et al. [29] also performed a replication study based on the study of Yang et al. [8], and extended their study in the literature [30]. They found three weaknesses of the LT model proposed by Yang et al. [8]: more context switching, more false alarms, and worse F1 than supervised methods.…”
Section: Effort-aware Software Defect Prediction — supporting
confidence: 54%
“…Yang et al. [1] strongly validate the importance of the unsupervised model, as well as their methodology of computing the reciprocal of the raw metric, excluding LA and LD, and then removing highly correlated metrics, which helped in ranking the resulting values in descending order [31]. However, after observing the correlation with the target value and the availability of sufficient data, it became clear that the supervised model was the better choice.…”
Section: Discussion — mentioning
confidence: 84%
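The unsupervised approach quoted above can be sketched in a few lines. This is a hedged illustration, not the authors' exact implementation: the record layout (`id`, metric name such as `LT`) and the zero-metric handling are assumptions; the core idea, ranking changes by the reciprocal of a single raw metric in descending order, follows the description in the citation statement.

```python
def rank_changes_unsupervised(changes, metric="LT"):
    """Rank changes for inspection using a Yang et al.-style unsupervised model.

    Each change is scored by the reciprocal of one raw change metric
    (e.g., LT); changes with a larger score are inspected first.
    `changes` is a list of dicts like {"id": ..., "LT": ...} (layout assumed).
    """
    scored = []
    for change in changes:
        value = change[metric]
        # Guard against division by zero; a zero metric gets the lowest score.
        score = 1.0 / value if value > 0 else 0.0
        scored.append((score, change["id"]))
    # Descending score order = ascending raw metric order.
    scored.sort(reverse=True)
    return [change_id for _, change_id in scored]
```

Under this scheme a small change (low LT) outranks a large one, which is what makes the model effort-aware: cheaper-to-inspect changes come first.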
“…Recently, researchers have put more emphasis on process metrics. A considerable number of process metrics have been proposed, mainly including: i) metrics based on code change history, such as number of modified lines [13], [19], [20], [23], [28]–[33], and code relative change metrics [13], [20], [27], [30]; ii) metrics based on developer information, such as number of distinct committers [15], [19], [20], [23]–[26], [30], [32], [34], experience of developers [16], [34], commit activities of developers [34], project team organizational structure [17], code ownership [34], and organizational dispersion degree [18]; iii) development-process-related metrics, such as number of revisions [13]–[15], [19], [20], [22], [23], [25], [30], number of defects repaired [30], number of refactorings [20], [30], code change complexity [21], [32], and number of historical defects [19], [23]. The most widely used, classical, and defect-related process metrics are Number of Revisions (NR), Number of Distinct Committers (NDC), Number of Modified Lines (NML), and code relative change metrics.…”
Section: A. Process Metrics — mentioning
confidence: 99%