Proceedings of the 2006 International Workshop on Software Quality
DOI: 10.1145/1137702.1137712

Revisiting the problem of using problem reports for quality assessment

Abstract: In this paper, we describe our experience with using problem reports from industry for quality assessment. The non-uniform terminology used in problem reports and validity concerns have been the subject of earlier research but are far from settled. To distinguish between terms such as defects or errors, we propose to answer three questions on the scope of a study, related to what (the problem appearance or its cause), where (problems related to software, executable or not, or the system), and when (problems recorded in all…

Cited by 8 publications (9 citation statements) | References 18 publications
“…The related terminology in this area (fault, error, cause or reason, failure, bug, defect, and anomaly) is often confusing because these terms are used interchangeably and inconsistently by many in industry and academia; see the further discussion in Mohagheghi et al. [27]. We therefore define the following terms, drawing on earlier work by Avižienis & Laprie [10] and Thane [9]: a fault is the static origin in the code that, during dynamic execution, propagates (depicted by a solid arrow in Figure 1) to an error (an intermediate infection of the code).…”
Section: Terminology
confidence: 99%
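A minimal sketch of the fault → error → failure chain that this excerpt describes, following the Avižienis & Laprie terminology it cites. The code below is illustrative only and is not from the cited papers; the function names and the off-by-one defect are invented for this example.

def average(values):
    # FAULT: a static defect in the source code -- the divisor should be
    # len(values), not len(values) + 1 (an off-by-one mistake).
    return sum(values) / (len(values) + 1)

def report_mean(values):
    # ERROR: when the fault is activated during execution, the internal
    # state (mean) holds a corrupted value -- the intermediate "infection".
    mean = average(values)
    # FAILURE: the erroneous state propagates to the observable output,
    # which deviates from the specified service.
    return "mean = {}".format(mean)

if __name__ == "__main__":
    # The specified (correct) result is 2.0; the delivered result is 1.5.
    print(report_mean([1, 2, 3]))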
“…Studies that rely on fault reports have raised several validity concerns regarding the reliability of these reports as a source of data for empirical research. Several of these concerns, including the subjectivity, inaccuracy, and ambiguity of the reports, have been discussed by Mohagheghi et al. [14] and Ostrand et al. [15]. Using the history of source-code changes has increased the validity of the results obtained from the present study.…”
Section: Introduction
confidence: 64%
“…Previous studies have dealt with predicting incident volumes by statistical methods [8], service-desk and incident-management challenges [9], activity-based management of IT service delivery [10], a knowledge-management-centric help desk [11], the maturity of the problem management process [12], the use of problem reports for quality assessment [13], a dynamic change-management model for managing IT changes [14], release management in component-based development [15], open source software releases [16], critical elements of the patch management process [17], and patch management challenges [18].…”
Section: Related Work
confidence: 99%