Proceedings of the 2012 International Symposium on Software Testing and Analysis (ISSTA 2012)
DOI: 10.1145/2338965.2336776

Understanding user understanding: determining correctness of generated program invariants

Abstract: Recently, work has begun on automating the generation of test oracles, which are necessary to fully automate the testing process. One approach to such automation involves dynamic invariant generation, which extracts invariants from program executions. To use such invariants as test oracles, however, it is necessary to distinguish correct from incorrect invariants, a process that currently requires human intervention. In this work we examine this process. In particular, we examine the ability of 30 users, acros…
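To make the classification task concrete, the following is a minimal, hypothetical sketch of how a dynamically inferred, Daikon-style invariant can be turned into a JUnit oracle, and of the judgment a user must make between a correct invariant and one that merely over-fits the observed executions. The class, method, and invariants are illustrative and are not drawn from the paper's subject programs.

// Hypothetical example: a Daikon-style invariant reported at a method
// exit point, rewritten as a JUnit 4 oracle. All names are illustrative.
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class InferredOracleExample {

    // Toy implementation standing in for the program under test.
    static int withdraw(int balance, int amount) {
        return balance - amount;
    }

    @Test
    public void generatedOracleFromInferredInvariants() {
        int balance = 100;
        int amount = 30;
        int result = withdraw(balance, amount);

        // Correct invariant: it held on every observed execution and it
        // also reflects the intended behaviour, so it should be kept.
        assertTrue("result == balance - amount", result == balance - amount);

        // Over-fitted invariant: it held on the executions the tool saw
        // (no overdrafts in the test data) but is not part of the intended
        // specification; a user acting as curator should reject it.
        assertTrue("result >= 0 (spurious)", result >= 0);
    }
}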

Cited by 30 publications (38 citation statements). References 33 publications.
“…These approaches typically differ in how the test oracle is constructed, and include: automatically inferring program invariants [Staats et al 2012b;Wei et al 2011]; automatically inferring parametrized test input assertions ; automatically producing concrete test input assertions (as is done by EVOSUITE) [Fraser and Arcuri 2011;Staats et al 2012a]. Currently, there is no scientific consensus which approach is preferable, and in practice all of these approaches appear to be used infrequently by industry.…”
Section: Fig
confidence: 99%
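The oracle styles listed in the statement above differ in what the generated assertion actually checks. The sketch below contrasts a concrete regression assertion (the style produced by tools such as EvoSuite) with an inferred-invariant style oracle; neither snippet is actual tool output, and the clamp example is purely hypothetical.

// Hand-written contrast between two generated-oracle styles; the method
// under test and both assertions are illustrative only.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class OracleStylesExample {

    static int clamp(int value, int lo, int hi) {
        return Math.max(lo, Math.min(hi, value));
    }

    // Concrete test input assertion: one specific input paired with one
    // captured output value, as in generated regression tests.
    @Test
    public void concreteAssertion() {
        assertEquals(10, clamp(42, 0, 10));
    }

    // Inferred-invariant style oracle: a property over the observed
    // executions rather than a single expected value.
    @Test
    public void invariantStyleAssertion() {
        int result = clamp(42, 0, 10);
        assertTrue(result >= 0 && result <= 10);
    }
}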
“…This is typical when evaluating testing approaches; early work refines the approach, after which human studies begin to rigorously assess the human factor (e.g. fault localization [16], invariant generation [19]). …”
Section: Discussion
confidence: 99%
“…In some cases, these approaches include methods for creating test oracles, but such approaches always -albeit often implicitly -require manual intervention by test engineers to inspect and correct the results [9,7]. Evidence supporting the effectiveness of these approaches is mixed, with user studies noting a tendency for test engineers to accept incorrect oracles [19,8].…”
Section: Oracle Data Set Selection
confidence: 99%