2022
DOI: 10.48550/arxiv.2201.05401
Preprint

Deep Learning for Agile Effort Estimation: Have We Solved the Problem Yet?

Vali Tawosi,
Rebecca Moussa,
Federica Sarro

Abstract: In the last decade, several studies have proposed the use of automated techniques to estimate the effort of agile software development. In this paper we perform a close replication and extension of a seminal work proposing the use of Deep Learning for agile effort estimation (namely Deep-SE), which has set the state-of-the-art since. Specifically, we replicate three of the original research questions aiming at investigating the effectiveness of Deep-SE for both within-project and cross-project effort estimatio…

Cited by 1 publication (2 citation statements). References 31 publications (122 reference statements).
“…A recent study on Deep-SE and future directions for SPE. In June 2022 (after our paper was submitted to ESEM 2022), Tawosi et al. [32] conducted a replication study on Deep-SE, comparing it against a traditional text-regression approach, called TFIDF-SE, which uses Term Frequency-Inverse Document Frequency (TF-IDF) for input feature extraction.…”
Section: Related Work
confidence: 99%
“…Deep-SE required 2-8 hours of pre-training to obtain the vector representation of issues for each single software project in their dataset of 16 projects. However, the replication study conducted by Tawosi et al. [32] showed that Deep-SE did not statistically outperform the classical non-neural-network approach TFIDF-SE, which is much faster in model construction. GPT2SP relied on the expensive language model GPT-2, which was trained on the content of 8 million websites spanning domains different from SE.…”
Section: Introduction
confidence: 96%
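The contrast drawn above — hours of neural pre-training versus near-instant model construction — can be illustrated with a minimal sketch of a TF-IDF text-regression baseline in the spirit of TFIDF-SE. This is not the authors' exact pipeline; the regressor choice (SVR) and the toy issue data are assumptions made purely for illustration.

```python
# Hypothetical sketch of a TF-IDF text-regression baseline for story-point
# estimation (illustrative only; not the TFIDF-SE pipeline from the paper).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Toy issue reports and story-point labels (made-up data for illustration).
issues = [
    "add login button to the settings page",
    "refactor authentication module and migrate the database schema",
    "fix typo in the readme",
    "implement a distributed caching layer with failover support",
]
story_points = [3.0, 8.0, 1.0, 13.0]

# TF-IDF features feed a shallow regressor; fitting takes a fraction of a
# second, in contrast to the 2-8 hours of pre-training reported for Deep-SE.
model = make_pipeline(TfidfVectorizer(), SVR(kernel="linear"))
model.fit(issues, story_points)

# Estimate effort for an unseen issue description.
estimate = model.predict(["fix typo in the documentation"])[0]
print(round(estimate, 1))
```

Because the whole pipeline is a sparse vectorizer plus a shallow regressor, model construction scales with vocabulary size rather than requiring per-project neural pre-training, which is the speed advantage the citing authors highlight.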