Ninth International Workshop on Principles of Software Evolution: In Conjunction With the 6th ESEC/FSE Joint Meeting 2007
DOI: 10.1145/1294948.1294954

Learning from bug-introducing changes to prevent fault prone code

Abstract: A version control system, such as CVS or SVN, provides the history of the changes performed during the evolution of a software project. Among all the changes performed, some introduce bugs, which are often resolved later by other changes. In this paper we use a technique to identify bug-introducing changes in order to train a model that can predict whether a new change introduces a bug. We represent software changes as elements of an n-dimensional vector space of terms coordin…

Cited by 37 publications (25 citation statements)
References 33 publications (24 reference statements)
“…In addition they predict faults at the class level of granularity (typically by file), while our level of granularity is by code change, typically spanning only 20 lines of code. Aversano et al [17] achieved 59% buggy precision and recall using KNN (K nearest neighbors) to locate faulty modules. Hata et al [2] showed that a technique used for spam filtering of emails can be successfully used on software modules to classify software as buggy or clean.…”
Section: Prediction On a Given Software Unit
confidence: 99%
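The change-level KNN classification these statements describe can be sketched as follows. This is a minimal, hypothetical illustration, not the cited authors' implementation: the term vectors, example tokens, labels, and the cosine-similarity majority vote are assumptions made only for the sketch.

```python
from collections import Counter
import math

def vectorize(tokens):
    # Term-frequency vector for the terms touched by a software change.
    return Counter(tokens)

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_predict(train, query, k=3):
    # train: list of (term_vector, is_bug_introducing) pairs.
    # Rank past changes by similarity to the new change and let the
    # k nearest neighbours vote on whether it introduces a bug.
    ranked = sorted(train, key=lambda ex: cosine(ex[0], query), reverse=True)
    votes = [label for _, label in ranked[:k]]
    return votes.count(True) > k // 2

# Hypothetical labelled history of past changes.
history = [
    (vectorize(["strcpy", "buffer", "len"]), True),
    (vectorize(["strncpy", "buffer", "len", "check"]), False),
    (vectorize(["free", "ptr", "null"]), True),
    (vectorize(["assert", "null", "check"]), False),
    (vectorize(["memcpy", "buffer"]), True),
]
new_change = vectorize(["strcpy", "buffer"])
print(knn_predict(history, new_change, k=3))  # → True
```

The vote threshold `k // 2` implements a simple majority; a real study would tune `k` and weight votes by similarity.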
“…This is done by examining the change log messages for indicators that a bug was fixed and assuming that the change that last touched the same code will have introduced the bug. Even though this seems to be a rather strong assumption, prediction accuracies of 35% (Śliwerski et al. 2005) to 60% (Aversano et al. 2007) have been reported. Gyimothy et al. (2005) study the Chidamber-Kemerer suite of object-oriented metrics (Chidamber and Kemerer 1994) within the Mozilla project.…”
Section: Prediction Of Defects
confidence: 98%
“…It is also possible to attempt to predict bugs in new or changed code by identifying the past changes that introduced a bug (Śliwerski et al. 2005; Aversano et al. 2007). This is done by examining the change log messages for indicators that a bug was fixed and assuming that the change that last touched the same code will have introduced the bug.…”
Section: Prediction Of Defects
confidence: 99%
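The log-message heuristic described in these statements — flag fix commits by keywords, then blame the change that last touched the fixed code — can be sketched over a toy in-memory history. The commit dictionaries, the `blame_of_deleted_lines` field, and the keyword pattern are hypothetical simplifications, not the cited approaches' actual data model.

```python
import re

# Keywords commonly taken to indicate a bug-fixing commit (an assumption;
# each study defines its own indicator set).
FIX_PATTERN = re.compile(r"\b(fix(e[sd])?|bug|defect|patch)\b", re.IGNORECASE)

def is_fix(commit):
    return bool(FIX_PATTERN.search(commit["message"]))

def bug_introducing(commits):
    # For each fix commit, blame the lines it deleted: the commit that last
    # touched those lines is assumed to have introduced the bug.
    suspects = set()
    for commit in commits:
        if is_fix(commit):
            suspects.update(commit["blame_of_deleted_lines"])
    return suspects

# Hypothetical three-commit history: c3 fixes a bug whose deleted lines
# were last touched by c1.
history = [
    {"id": "c1", "message": "add parser", "blame_of_deleted_lines": []},
    {"id": "c2", "message": "refactor I/O", "blame_of_deleted_lines": []},
    {"id": "c3", "message": "fix crash in parser",
     "blame_of_deleted_lines": ["c1"]},
]
print(bug_introducing(history))  # → {'c1'}
```

In a real repository the `blame_of_deleted_lines` step would come from the version control system's annotate/blame facility, which is also where the file-renaming and line-movement complications mentioned below arise.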
“…Several past projects introduce and refine an approach to finding fix-inducing commits that is based on creating a link between the bug report database and the code repository using commit messages [9], [10], [5], [2]. Part of the challenge is to trace back from the fix commit to the fix inducing commit while accounting for changes in file structure including line number and method name changes.…”
Section: Related Work
confidence: 99%
“…But we may not want to examine every revision or commit; just those that are simple, those that fix bugs and those that introduce bugs. Bug fixing commits may be identified by creating links between SCM systems and issue tracking databases such as Bugzilla (http://www.bugzilla.org), Jira (http://www.jira.com) and Trac (http://trac.edgewall.org) [4], [8].…”
Section: Introduction
confidence: 99%
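The SCM-to-issue-tracker linking mentioned above is commonly done by scanning commit messages for issue identifiers. Below is a minimal sketch assuming a hypothetical identifier format (`#123` or `BUG-123`); real trackers such as Bugzilla or Jira have their own conventions, and published link-recovery tools use richer heuristics.

```python
import re

# Hypothetical id formats: "#123" or "BUG-123".
ISSUE_ID = re.compile(r"(?:#|BUG-)(\d+)")

def link_commits_to_issues(commit_messages, known_issue_ids):
    # Map each commit sha to the tracker issues its message mentions,
    # keeping only ids that actually exist in the issue database.
    links = {}
    for sha, msg in commit_messages.items():
        for m in ISSUE_ID.finditer(msg):
            issue = int(m.group(1))
            if issue in known_issue_ids:
                links.setdefault(sha, set()).add(issue)
    return links

commits = {
    "a1b2": "Fix NPE in renderer, closes #101",
    "c3d4": "Update docs",
    "e5f6": "BUG-202: guard against empty list",
}
issues = {101, 202, 303}
print(link_commits_to_issues(commits, issues))
# → {'a1b2': {101}, 'e5f6': {202}}
```

Checking extracted ids against the known issue set filters out false matches such as revision numbers that happen to look like issue references.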