Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering 2020
DOI: 10.1145/3324884.3416667

Revisiting the relationship between fault detection, test adequacy criteria, and test set size

Cited by 39 publications (24 citation statements). References 54 publications.
“…To investigate the possible changes in grades for our first research question, we designed our test suite sampler to simulate the iterative development of various individual test suites. In order to do this, we utilised the suite growth technique described by Chen et al. [14], in which a test suite is extended by randomly selecting an additional test that increases a given criterion, generating a new suite whenever a test is added. The first test suite is created by simply randomly selecting any test from the whole set.…”
Section: Grading Test Suitesmentioning
confidence: 99%
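The growth technique quoted above can be sketched in a few lines. This is a minimal illustration, not the cited implementation; the function and parameter names (`grow_suite`, `coverage`) are illustrative, and the criterion is abstracted as a set of targets each test satisfies.

```python
import random

def grow_suite(tests, coverage, seed=0):
    """Sketch of iterative suite growth: start from one random test,
    then repeatedly add a random test that increases the criterion,
    recording a new suite after every addition. `coverage` maps each
    test to the set of criterion targets it satisfies (illustrative).
    """
    rng = random.Random(seed)
    remaining = list(tests)
    rng.shuffle(remaining)
    suite, covered, snapshots = [], set(), []

    # The first suite is any test chosen at random from the whole set.
    first = remaining.pop()
    suite.append(first)
    covered |= coverage[first]
    snapshots.append(list(suite))

    while remaining:
        # Candidates are tests that would raise the criterion score.
        improving = [t for t in remaining if coverage[t] - covered]
        if not improving:
            break  # criterion saturated; growth stops here
        pick = rng.choice(improving)
        remaining.remove(pick)
        suite.append(pick)
        covered |= coverage[pick]
        snapshots.append(list(suite))  # each addition yields a new suite
    return snapshots
```

Each snapshot in the returned list is one of the intermediate suites whose grades the quoted study then examines.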
“…As with the mutation score property, we only use mutants with unique combinations of passing tests, to avoid bias from different proportions of similar mutants. Since it is possible for suites to detect every mutant, we use a "stacking" approach to continue growing the suite [14,15]; the suite's current mutation score is reset to zero by the generator, so every available test that detects any mutants can be selected. Once there is only one unselected test, our generator ends the generation run.…”
Section: Grading Test Suitesmentioning
confidence: 99%
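The "stacking" variant described in this quote can be sketched as follows. This is a hedged reconstruction from the quoted description alone, not the authors' code: when no unselected test kills a new mutant, the detected set is reset to zero so any mutant-detecting test becomes selectable again, and generation ends once a single unselected test remains. All names are illustrative.

```python
import random

def grow_with_stacking(tests, kills, seed=0):
    """Sketch of stacked suite growth: `kills` maps each test to the
    set of mutants it detects (illustrative). Growth continues past
    saturation by resetting the detected set, and stops when only one
    unselected test is left, as in the quoted description.
    """
    rng = random.Random(seed)
    remaining = list(tests)
    suite, detected, snapshots = [], set(), []
    while len(remaining) > 1:  # end the run with one unselected test
        improving = [t for t in remaining if kills[t] - detected]
        if not improving:
            # "Stack": reset the mutation score so every test that
            # detects any mutant can be selected again.
            detected = set()
            improving = [t for t in remaining if kills[t]]
            if not improving:
                break  # no remaining test detects any mutant at all
        pick = rng.choice(improving)
        remaining.remove(pick)
        suite.append(pick)
        detected |= kills[pick]
        snapshots.append(list(suite))
    return snapshots
```

Resetting (rather than terminating) lets the generator keep producing larger suites even after every mutant has been detected once, which is the point of stacking.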
“…An alternative approach that addresses this limitation is mutation analysis, which systematically seeds artificial faults, called mutants, into a program and measures a test suite's ability to detect them [4]. Mutation analysis is widely considered the best approach for evaluating test suite efficacy [5], [6], [7]. Mutation testing is an iterative testing approach that builds on top of mutation analysis and uses undetected mutants as concrete test goals to guide the testing process.…”
Section: Introductionmentioning
confidence: 99%
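A toy example makes the mutation-analysis idea in this quote concrete: seed small faults into a function, run the tests against each faulty version, and count how many faults at least one test exposes. This is a minimal sketch with illustrative names, not any particular mutation tool.

```python
def mutation_score(mutants, tests):
    """A mutant is 'killed' when at least one test fails on it.
    `mutants` are faulty variants of the function under test;
    each test returns True iff its assertion holds (illustrative).
    """
    killed = sum(1 for m in mutants if any(not t(m) for t in tests))
    return killed / len(mutants)

# Function under test: add(a, b) = a + b. Each mutant seeds one fault.
mutants = [
    lambda a, b: a - b,      # arithmetic operator replaced
    lambda a, b: a + b + 1,  # off-by-one constant inserted
    lambda a, b: a + b,      # equivalent mutant: no test can kill it
]
tests = [
    lambda f: f(2, 3) == 5,
    lambda f: f(0, 0) == 0,
]
```

Here the first two mutants are killed and the third (behaviorally equivalent) is not, so the score is 2/3; undetected mutants like these are what mutation testing turns into concrete test goals.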
“…However, we know that coverage can often be quickly saturated [14], and that adequate coverage is a necessary, but not sufficient condition for all bugs to be detected. Indeed, fuzzing researchers are aware of the limitations of coverage as a proxy measure [15].…”
Section: Introductionmentioning
confidence: 99%
“…Mutation analysis is free of all the above problems that we identified. For example, mutation analysis is much harder to saturate than code coverage [14], and is more robust than various forms of coverage as a proxy for the fault revealing power of the test suite.…”
Section: Introductionmentioning
confidence: 99%