2022
DOI: 10.3390/math10244649
Propose-Specific Information Related to Prediction Level at x and Mean Magnitude of Relative Error: A Case Study of Software Effort Estimation

Abstract: The prediction level at x (PRED(x)) and mean magnitude of relative error (MMRE) are measured based on the magnitude of relative error between real and predicted values. They are the standard metrics that evaluate accurate effort estimates. However, these values might not reveal the magnitude of over-/under-estimation. This study aims to define additional information associated with the PRED(x) and MMRE to help practitioners better interpret those values. We propose the formulas associated with the PRED(x) and …
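The two metrics named in the abstract have standard definitions: the magnitude of relative error is MRE = |actual − predicted| / actual, MMRE is its mean over all projects, and PRED(x) is the fraction of projects whose MRE does not exceed x. A minimal sketch of those definitions (function and variable names are illustrative, not taken from the paper):

```python
# Standard software-effort-estimation accuracy metrics (illustrative sketch).
def mre(actual, predicted):
    """Magnitude of relative error for a single project."""
    return abs(actual - predicted) / actual

def mmre(actuals, predictions):
    """Mean magnitude of relative error across projects."""
    return sum(mre(a, p) for a, p in zip(actuals, predictions)) / len(actuals)

def pred(x, actuals, predictions):
    """Fraction of projects whose MRE does not exceed x (commonly x = 0.25)."""
    hits = sum(1 for a, p in zip(actuals, predictions) if mre(a, p) <= x)
    return hits / len(actuals)
```

For example, with actual efforts [100, 200, 400] and predictions [110, 150, 400], the MREs are 0.10, 0.25, and 0.00, giving MMRE ≈ 0.117 and PRED(0.25) = 1.0. Note how the over-estimate (110) and the under-estimate (150) contribute identically to both aggregates — the blind spot the abstract describes.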

Cited by 5 publications (6 citation statements) · References 38 publications
“…Besides that, the absolute valuable information related to MMRE and PRED(0.25) (sig_Left, sig_Right, sig) attained from PytEffort might be slightly smaller than that from EnsEffort/Group 1. As mentioned in [63], PytEffort might be more stable than EnsEffort/Group 1. Additionally, the p-value computed by the Mann-Whitney U-test between EnsEffort/Group 1 and PytEffort is less than 0.05, demonstrating a statistically significant difference in the medians of the two methodologies.…”
Section: Results (mentioning)
Confidence: 86%
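The citation above uses a Mann-Whitney U-test to compare the medians of two methods without assuming normality. As a stdlib-only sketch of how the reported p < 0.05 decision works, the test can be computed with the normal approximation (no tie-variance or continuity correction; in practice `scipy.stats.mannwhitneyu` would be the usual choice):

```python
import math
from statistics import NormalDist

def mann_whitney_p(sample1, sample2):
    """Two-sided Mann-Whitney U-test p-value via the normal approximation.

    Sketch only: ties share the mean rank, but no tie-variance or
    continuity correction is applied.
    """
    n1, n2 = len(sample1), len(sample2)
    pooled = sorted((value, idx) for idx, value in enumerate(sample1 + sample2))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        while j + 1 < n1 + n2 and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1          # 1-based; tied values share the mean rank
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    rank_sum1 = sum(ranks[:n1])             # original indices < n1 belong to sample1
    u1 = rank_sum1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)               # smaller of the two U statistics
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma                    # z <= 0 by construction
    return min(1.0, 2 * NormalDist().cdf(z))
```

Two clearly separated samples yield a small p-value (reject equal medians), while identical samples yield p ≈ 1.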
“…Evaluation measures are used to determine how effective the proposed approach is. Measures such as MBRE, MIBRE, MAE, SA, MMRE, and PRED(0.25), together with their useful information, were utilized, as previously reported [4], [59], [62], [63]. Because of this, the results of the experiments in this research can be applied to a much larger group of projects.…”
Section: F. Validity Evaluation (mentioning)
Confidence: 99%
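Of the measures listed in the citation above, MAE, MBRE, and MIBRE have compact standard formulations in the effort-estimation literature (assumed here, not quoted from the paper: MBRE divides the absolute error by the smaller of actual and predicted effort, MIBRE by the larger). A sketch of those three; SA is omitted because its baseline term depends on a random-guessing procedure that varies between studies:

```python
# Common effort-estimation error measures (assumed standard definitions).
def mae(actuals, predictions):
    """Mean Absolute Error: average absolute deviation in effort units."""
    return sum(abs(a - p) for a, p in zip(actuals, predictions)) / len(actuals)

def mbre(actuals, predictions):
    """Mean Balanced Relative Error: absolute error over the smaller value."""
    return sum(abs(a - p) / min(a, p)
               for a, p in zip(actuals, predictions)) / len(actuals)

def mibre(actuals, predictions):
    """Mean Inverted Balanced Relative Error: absolute error over the larger value."""
    return sum(abs(a - p) / max(a, p)
               for a, p in zip(actuals, predictions)) / len(actuals)
```

For a single project with actual effort 100 and prediction 80, MAE = 20, MBRE = 20/80 = 0.25, and MIBRE = 20/100 = 0.20; unlike MRE, both balanced variants penalize over- and under-estimates symmetrically.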
“…(m)), Accuracy, Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE). Detailed descriptions of the measures, as well as the reasons for their use, are given in [7,21,36].…”
Section: Prediction Accuracy (mentioning)
Confidence: 99%
“…In software estimation, the mean magnitude of relative error (MMRE) [30,31] and PRED(x) [31] are used to evaluate the most likely estimate of effort. Meanwhile, the magnitude of relative error (MRE) is obtained using Eq.…”
Section: Comparison with UFP Input on COCOMO II (mentioning)
Confidence: 99%