2022
DOI: 10.1109/tse.2020.3040793
Learning From Mistakes: Machine Learning Enhanced Human Expert Effort Estimates

Abstract: In this paper, we introduce a novel approach to predictive modeling for software engineering, named Learning From Mistakes (LFM). The core idea underlying our proposal is to automatically learn from past estimation errors made by human experts, in order to predict the characteristics of their future misestimates, therefore resulting in improved future estimates. We show the feasibility of LFM by investigating whether it is possible to predict the type, severity and magnitude of errors made by human experts whe…
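The abstract describes learning a model of past expert estimation errors and using it to correct future estimates. A minimal sketch of that general idea follows; this is an illustrative reconstruction, not the authors' actual LFM method, and the synthetic history and the simple mean-ratio "error model" are assumptions made here for illustration only:

```python
import statistics

# Past projects: (expert_estimate, actual_effort) in person-months.
# Synthetic data, for illustration only.
history = [(10, 14), (20, 26), (8, 11), (40, 55), (15, 21)]

# Learn the expert's systematic error as the mean actual/estimate ratio
# (a deliberately simple stand-in for the richer error models in the paper).
ratios = [actual / est for est, actual in history]
mean_ratio = statistics.mean(ratios)  # > 1 means systematic under-estimation

def corrected_estimate(expert_estimate: float) -> float:
    """Adjust a new expert estimate by the learned error factor."""
    return expert_estimate * mean_ratio

print(f"learned error factor: {mean_ratio:.2f}")
print(f"corrected estimate for a 30 pm guess: {corrected_estimate(30):.1f} pm")
```

On this toy history the expert under-estimates by about 37%, so a new 30 person-month estimate is adjusted upward accordingly.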


Cited by 14 publications (11 citation statements)
References 93 publications
“…Meanwhile, estimating the most realistic amount of effort in the early stage of software development is difficult since the information available at that stage is usually incomplete and uncertain. Although construction of formal software effort estimation models started in the very early times of the industrialization of software production, expert judgement still remains the dominant strategy for effort prediction in practice where the accuracy of the estimate is sensitive to the practitioner's expertise and thus prone to bias [53], [87]. Early work to build an estimation technique tried to find a set of factors related to the software size and cost by using regression analysis [9].…”
Section: Software Effort Estimation
confidence: 99%
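The early regression-based work cited above can be illustrated with a COCOMO-style power law, effort = a * size^b, fitted by ordinary least squares in log space. The project data below (function points vs. person-months) are synthetic and purely illustrative:

```python
import math

# (size in function points, effort in person-months) -- hypothetical projects.
projects = [(100, 5.2), (250, 14.1), (400, 24.0), (800, 55.3), (1200, 90.5)]

# Fit log(effort) = log(a) + b * log(size) by ordinary least squares.
xs = [math.log(size) for size, _ in projects]
ys = [math.log(effort) for _, effort in projects]
n = len(projects)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
a = math.exp(mean_y - b * mean_x)

def predict_effort(size_fp: float) -> float:
    """Predicted effort (person-months) for a project of the given size."""
    return a * size_fp ** b

print(f"fitted model: effort = {a:.4f} * size^{b:.3f}")
print(f"predicted effort for 600 FP: {predict_effort(600):.1f} person-months")
```

The exponent b > 1 captures the diseconomy of scale typical of such models: doubling size more than doubles predicted effort.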
“…Construct validity deals with the degree to which the predictor and response variables measure what they are supposed to measure [71]. To mitigate possible threats arising from the predictors and target variables used to build the prediction models we used five publicly available datasets which were collected from real-world projects and contain reliable measures of software effort and size, such as Function Point, which are still in use in industrial settings and widely used in research studies [2], [27], [87]. Moreover, this data has been widely used in previous empirical studies to validate estimation models [10], [47], [48], [87].…”
Section: Threats To Validity
confidence: 99%
“…Choetkiertikul et al [5] also collected eight open-source projects stored in six open-source repositories, aiming at benchmarking Deep-SE against Porru's TF/IDF-SE approach using a common dataset. In total, the Porru dataset, as collected by Choetkiertikul et al [5], [15], contains 4,904 issues. Among these eight projects, six are common with the Choe dataset (i.e., TIMOB, TISTUD, APSTUD, MESOS, MULE, and XD), although they contain a different subset of issues as Porru et al applied a set of more restrictive filtering criteria than those used by Choetkiertikul et al in building the Choet dataset [5].…”
Section: The Porru Dataset
confidence: 99%
“…Story Point (SP) is commonly used to measure the effort needed to implement a user story [2], [4] and agile teams mainly rely on expert-based estimation [1], [5]. However, similar to traditional software project effort estimation [6], [7], task-level effort estimation is not immune to the expert's subjective assessment [4]. Subjective assessment may not only lead to inaccurate estimations but also, and more importantly to an agile team, it may introduce inconsistency in estimates throughout different sprints.…”
Section: Introduction
confidence: 99%