2017
DOI: 10.1002/smr.1925

Evaluating Pred(p) and standardized accuracy criteria in software development effort estimation

Abstract: Software development effort estimation (SDEE) plays a primary role in software project management, yet choosing the appropriate SDEE technique remains elusive for many project managers and researchers. Moreover, the choice of a reliable estimation accuracy measure is crucial, because SDEE techniques behave differently under different accuracy measures. The most widely used accuracy measures in SDEE are those based on magnitude of relative error (MRE), such as mean/median MRE (MMRE/MedMRE) and prediction at level…

Cited by 50 publications (55 citation statements); references 44 publications.
“…In spite of the widespread use of MMRE among researchers, it has been criticized for being biased toward underestimation, making it an untrustworthy and unreliable accuracy measure. With regard to Pred(p), the study carried out by Idri et al showed that Pred(p) is less biased toward underestimation and can be used as a reliable accuracy measure. Therefore, Pred(p) was retained in this study, whereas MMRE was discarded.…”
Section: Empirical Design
confidence: 99%
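The MRE-based measures discussed in this statement can be sketched as follows. This is an illustrative implementation of the standard textbook definitions, not code from the paper or from the cited study:

```python
def mre(actual, predicted):
    """Magnitude of relative error for a single project."""
    return abs(actual - predicted) / actual

def mmre(actuals, predictions):
    """Mean MRE over all projects (the measure criticized as biased
    toward techniques that underestimate)."""
    errors = [mre(a, p) for a, p in zip(actuals, predictions)]
    return sum(errors) / len(errors)

def pred(actuals, predictions, p=0.25):
    """Pred(p): fraction of projects whose MRE is at most p.

    p = 0.25 is the level most commonly reported in the SDEE literature.
    """
    errors = [mre(a, est) for a, est in zip(actuals, predictions)]
    return sum(1 for e in errors if e <= p) / len(errors)
```

For example, with actual efforts [100, 200, 300] and estimates [110, 260, 150], the per-project MREs are 0.1, 0.3, and 0.5, giving MMRE = 0.3 and Pred(0.25) = 1/3.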
“…This section describes the measures used to assess the performance of single and ensemble FA estimation techniques from two perspectives: standardized accuracy (SA) and effect size assess the reasonableness of a given estimation technique, i.e., they check whether the technique is actually predicting rather than guessing with respect to a baseline model; the other five performance criteria evaluate the accuracy of the estimation technique.…”
Section: Empirical Design
confidence: 99%
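Standardized accuracy can be sketched along the lines of the widely used Shepperd–MacDonell formulation, which compares a technique's mean absolute residual (MAR) against the expected MAR of random guessing (predicting each project's effort by the actual effort of another project). This is an assumption about the baseline; the cited study's exact formulation may differ:

```python
import itertools

def mar(actuals, predictions):
    """Mean absolute residual over all projects."""
    return sum(abs(a, ) if False else abs(a - p)
               for a, p in zip(actuals, predictions)) / len(actuals)

def sa(actuals, predictions):
    """Standardized accuracy, as a percentage.

    Baseline: the exact expected MAR of random guessing, i.e. the mean
    of |y_i - y_j| over all ordered pairs of distinct projects.
    SA = 100 means a perfect predictor; SA near 0 means the technique
    is doing no better than guessing.
    """
    n = len(actuals)
    baseline = sum(abs(yi - yj)
                   for yi, yj in itertools.permutations(actuals, 2)) / (n * (n - 1))
    return (1 - mar(actuals, predictions) / baseline) * 100
```

A perfect prediction yields SA = 100, while predicting the mean of [1, 2, 3] for every project yields SA = 50 against this baseline.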
“…The Borda count method was used to draw the final ranking by combining the ranks provided by the five performance measures. The rationale for using several performance measures is that prior studies showed that the selection of the best estimation technique depends on which performance indicator is used, since relying on only one criterion may lead to biased conclusions. In other words, each technique can be ranked differently according to different accuracy criteria, which can lead to contradictory results (i.e., one criterion selects model A as the best, whereas another selects model B).…”
Section: Empirical Design
confidence: 99%
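The Borda count aggregation described in this statement can be sketched as follows. The technique names are hypothetical; this is an illustrative implementation of the classic voting rule, not the cited study's code:

```python
def borda(rankings):
    """Combine several per-criterion rankings with the Borda count.

    rankings: a list of lists, each ordering technique names from best
    to worst under one accuracy criterion. A technique in position k of
    an n-item ranking scores n - k points; the final ranking orders
    techniques by total score, highest first.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, tech in enumerate(ranking):
            scores[tech] = scores.get(tech, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)
```

For instance, if three criteria rank hypothetical techniques as ["A", "B", "C"], ["B", "A", "C"], and ["A", "C", "B"], the totals are A = 8, B = 6, C = 4, so the combined ranking is ["A", "B", "C"], even though B won under the second criterion.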