Proceedings of the 36th International Conference on Software Engineering (ICSE 2014)
DOI: 10.1145/2568225.2568269
Comparing static bug finders and statistical prediction


Cited by 103 publications (94 citation statements). References 25 publications.
“…Prior research has assumed that 5% (sometimes 20%) of the code could realistically be inspected under deadline [44]. Additionally, Rahman et al. compare SBF with DP (a file-level statistical defect predictor) by allowing the number of warnings from SBF to set the inspection budget (denoted AUCECL) [43]. They assign the DP the same budget and compare the resulting AUCEC scores.…”
Section: Evaluating Defect Predictions
confidence: 99%
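The budget-matched comparison (AUCECL) described in the excerpt above can be made concrete with a short sketch. This is a minimal illustration, not the implementation from [43]: the file sizes, bug counts, rankings, and project totals are hypothetical, and the curve is approximated with simple rectangles.

def aucec(ranked, budget, total_sloc, total_bugs):
    """Area under the cost-effectiveness curve, truncated at a SLOC budget.

    ranked: (sloc, bugs) units in the order the predictor says to inspect.
    """
    area, inspected, found = 0.0, 0, 0.0
    for sloc, bugs in ranked:
        take = min(sloc, budget - inspected)
        if take <= 0:
            break
        found += bugs * take / sloc          # pro-rate bugs in partial reads
        inspected += take
        # One rectangle of the curve: height = recall so far, width = cost.
        area += (found / total_bugs) * (take / total_sloc)
    return area

TOTAL_SLOC, TOTAL_BUGS = 10_000, 20          # hypothetical project totals

sbf_order = [(120, 2), (40, 0), (300, 1)]    # code regions flagged by SBF
budget = sum(sloc for sloc, _ in sbf_order)  # SBF's warnings set the budget

dp_order = [(2_000, 3), (1_500, 0), (5_000, 2)]  # DP's hypothetical ranking

print("SBF AUCEC_L:", aucec(sbf_order, budget, TOTAL_SLOC, TOTAL_BUGS))
print("DP  AUCEC_L:", aucec(dp_order, budget, TOTAL_SLOC, TOTAL_BUGS))

Under this metric, whichever technique earns the larger area within the shared budget is the more cost-effective one.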
“…A prediction model is awarded credit, ranging from 0 to 1, for each (eventually) buggy line flagged as suspicious. Previous work by Rahman et al. has compared SBF and DP models using two types of credit: full (or optimistic) and partial (or scaled) credit [43], which we adapt to line-level defect prediction. The former metric awards a model one credit point for each bug iff at least one line of the bug was marked buggy by the model.…”
Section: Evaluating Defect Predictions
confidence: 99%
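The two credit schemes in this excerpt are easy to pin down in code. A minimal sketch, assuming each bug is represented as the set of lines its eventual fix touches; the sets below are invented for illustration.

def full_credit(bugs, flagged):
    """Optimistic: one point per bug iff any of its lines was flagged."""
    return sum(1.0 for lines in bugs if lines & flagged)

def partial_credit(bugs, flagged):
    """Scaled: per bug, the fraction of its lines that were flagged."""
    return sum(len(lines & flagged) / len(lines) for lines in bugs)

bugs = [{10, 11, 12}, {40}, {70, 71}]  # lines each eventual fix touched
flagged = {11, 40, 99}                 # lines the model marked buggy

print(full_credit(bugs, flagged))      # 2.0  (bugs 1 and 2 each score)
print(partial_credit(bugs, flagged))   # 1/3 + 1 + 0 = 1.33...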
“…For example, Kocaguneli et al. combined several single software effort estimation models to create more powerful multi-model ensembles [21]. Also, Rahman et al. used static bug-finding to improve the performance of statistical defect prediction and vice versa [38].…”
Section: Related Work
confidence: 99%
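The excerpt gives only a high-level account of how [38] combines the two techniques. One plausible, purely hypothetical realisation is to feed static-analysis output into the statistical model as an extra feature; the feature set and numbers below are invented and do not come from the paper.

from sklearn.linear_model import LogisticRegression

# Per-file rows: [lines_of_code, recent_commits, sbf_warning_count].
X = [
    [1200, 15, 4],
    [300,   2, 0],
    [800,   9, 2],
    [150,   1, 0],
]
y = [1, 0, 1, 0]  # 1 = file later turned out to be defective

model = LogisticRegression().fit(X, y)

# Rank a new file by predicted defect probability, warnings included.
print(model.predict_proba([[500, 5, 3]])[0][1])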
“…Rahman et al. [34] argue that a more efficient way of comparing quality assurance efforts, when defect prediction models are involved, is by comparing AUCEC (Area Under the Cost-Effectiveness Curve) values [1]. The approach followed in our paper was motivated by the fact that a comparison of cost values (actual and simulated) is considered more readable by our business…”
Section: Threats To Validity
confidence: 99%
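As a small, self-contained illustration of the AUCEC-based comparison this excerpt contrasts with raw cost values: each inspection strategy reduces to a single scalar area, computed here with the trapezoidal rule over hypothetical curves.

def area(curve):
    """Trapezoidal area under (cost fraction, recall) points sorted by cost."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(curve, curve[1:]))

# Hypothetical cost-effectiveness curves for two inspection strategies.
actual    = [(0.0, 0.0), (0.2, 0.5), (0.5, 0.8), (1.0, 1.0)]
simulated = [(0.0, 0.0), (0.2, 0.3), (0.5, 0.6), (1.0, 1.0)]

print("AUCEC(actual):   ", area(actual))
print("AUCEC(simulated):", area(simulated))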