Proceedings of 16th International Conference on Software Engineering
DOI: 10.1109/icse.1994.296778
Experiments on the effectiveness of dataflow- and control-flow-based test adequacy criteria

Abstract: This paper reports an experimental study investigating

Cited by 544 publications (657 citation statements)
References 16 publications
“…This is especially important when assessing the performance of a research approach. A large body of the literature has resorted to extensive empirical studies for devising a reliable experimental protocol [30][31][32]. Recently, Allix et al. have proposed a large-scale empirical study on the dataset sizes used in the assessment of machine learning-based malware detection approaches [23].…”
Section: Related Work
confidence: 99%
“…Despite results in the literature for the Siemens subjects showing that the whole-program counterparts of BR and, especially, DU, perform better than random testing [8], these strategies were not useful for the parts of the program affected by each change. This result can be explained by the low coverage levels attained for BR and DU in these subjects, which ranged approximately between 50% and 90%, and which were lower than for whole-program testing, for which 100% coverage was often achieved on the same subjects.…”
Section: Results and Analysis
confidence: 72%
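The statement above contrasts branch (BR) and def-use (DU) coverage. A minimal Python sketch of the distinction — the toy function and the pair/branch labels are illustrative, not taken from the cited study — shows that a suite can cover every branch outcome while exercising only some def-use pairs:

```python
# Hypothetical illustration of BR vs DU adequacy (not from the paper).
# A def-use (DU) pair links a definition of a variable to a later use of it;
# DU coverage requires exercising each pair, which can be stricter than
# covering each branch outcome (BR).

def classify(x):
    if x > 0:            # branch b1 (true/false)
        sign = 1         # definition d1 of `sign`
    else:
        sign = -1        # definition d2 of `sign`
    if x % 2 == 0:       # branch b2 (true/false)
        return sign * 2  # use u_double of `sign`
    return sign          # use u_plain of `sign`

def run(tests):
    """Record which branch outcomes and DU pairs a test suite exercises."""
    branches, du_pairs = set(), set()
    for x in tests:
        d = 'd1' if x > 0 else 'd2'
        branches.add(('b1', x > 0))
        branches.add(('b2', x % 2 == 0))
        du_pairs.add((d, 'u_double' if x % 2 == 0 else 'u_plain'))
    return branches, du_pairs

# The suite {2, -1} covers all four branch outcomes...
br, du = run([2, -1])
assert len(br) == 4
# ...but only two of the four DU pairs: (d1, u_double) and (d2, u_plain).
assert du == {('d1', 'u_double'), ('d2', 'u_plain')}
```

Adding tests such as 1 and -2 is needed to reach the remaining pairs (d1, u_plain) and (d2, u_double), which is the sense in which DU adequacy can demand more tests than BR adequacy.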
“…Therefore, we decided to start our investigation on a number of subjects from the Siemens suite [8] that we translated from C to Java. These subjects are listed in Table I, where the columns show, respectively, the name of the subject, a short description, the size in lines of code (LOC), the number of test cases available, and the number of changes used in our study.…”
Section: Implementation
confidence: 99%
“…To evaluate bug localisation techniques, the Siemens Programs [15] are often used [1,2,13] as a reference suite of C programs artificially instrumented with different bugs. More specifically, usually only a small subset of this benchmark is used.…”
Section: Discussion
confidence: 99%
“…If the precision rises significantly when adding graphs containing a certain method, this method is deemed more likely to contain a bug. Experiments with five out of 130 bugs from the Siemens Programs [15] demonstrate good classification performance, but do not evaluate the precision of the bug localisation. Furthermore, the authors do not generate a ranking of methods suspected to contain a bug.…”
Section: Call Graph Based Fault Detection
confidence: 99%