“…The best case is "nsichneu (17)," where our tool achieved 63% coverage, compared to 48% by the best other tool. We show in the appendix that, if we run our tool with the DFS heuristic, we obtain an even higher coverage of 69%.…”
Section: Results (mentioning, confidence: 95%)
“…In contrast to our approach, the abstraction is not refined. In [17], conditional model checking is used to generate a residual program that represents the part of the program left unverified; the residual is then tested.…”
Abstract. Concolic testing is a promising method for generating test suites for large programs. However, it suffers from the path-explosion problem and often fails to find tests that cover difficult-to-reach parts of programs. In contrast, model checkers based on counterexample-guided abstraction refinement explore programs exhaustively, but fail to scale to large programs with high precision. In this paper, we present a novel method that iteratively combines concolic testing and model checking to find a test suite for a given coverage criterion. If concolic testing fails to cover some test goals, then the model checker refines its program abstraction to prove more paths infeasible, which reduces the search space for concolic testing. We have implemented our method on top of the concolic-testing tool Crest and the model checker CPAchecker. We evaluated our tool on a collection of programs and a category of SV-COMP benchmarks. In our experiments, we observed an improvement in branch coverage compared to Crest from 48% to 63% in the best case, and from 66% to 71% on average.
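The abstract describes an alternation between two engines. The following is a minimal, runnable sketch of that loop under a toy model in which test goals are integers; every name here (generate_tests, concolic, refine, and the stand-in implementations) is a hypothetical placeholder and does not reflect the real interfaces of Crest or CPAchecker.

```python
# Toy sketch of the iteration in the abstract: concolic testing covers
# what it can; when it stalls, the model checker refines the abstraction
# to prove more paths infeasible, shrinking the concolic search space.
# All names and stand-in implementations are hypothetical.

def generate_tests(goals, concolic, refine):
    """Alternate concolic testing and abstraction refinement until every
    goal is covered, proved unreachable, or neither side progresses."""
    goals = set(goals)
    tests, abstraction = [], frozenset()       # empty abstraction: all paths feasible
    while goals:
        new_tests, covered = concolic(abstraction, goals)
        tests.extend(new_tests)
        goals -= covered
        if covered:
            continue                           # concolic testing still makes progress
        refined, infeasible = refine(abstraction, goals)
        goals -= infeasible                    # goals proved unreachable need no test
        if refined == abstraction and not infeasible:
            break                              # neither tool progressed: give up
        abstraction = refined
    return tests

# Trivial stand-ins, only so the sketch runs end to end:
def concolic(abstraction, goals):
    easy = {g for g in goals if g % 2 == 0}    # pretend even-numbered goals are easy
    return [f"input-for-goal-{g}" for g in easy], easy

def refine(abstraction, goals):
    hardest = min(goals)                       # "prove" one remaining goal infeasible
    return abstraction | {hardest}, {hardest}

print(generate_tests({1, 2, 3, 4}, concolic, refine))  # e.g. ['input-for-goal-2', 'input-for-goal-4']
```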
“…First, a reducer for a specific class of conditions was proposed [26]. Then, reducer-based CMC [13] generalized the first approach to use a reducer, named ParComp, which supports all kinds of conditions, and showed that it is indeed a reducer [13].…”
There are many hard verification problems that are currently only solvable by applying several verifiers that are based on complementing technologies. Conditional model checking (CMC) is a successful solution for cooperation between verification tools. In CMC, the first verifier outputs a condition describing the state space that it successfully verified. The second verifier uses the condition to focus its verification on the unverified state space. To use arbitrary second verifiers, we recently proposed a reducer-based approach. One can use the reducer-based approach to construct a conditional verifier from a reducer and a (non-conditional) verifier: the reducer translates the condition into a residual program that describes the unverified state space and the verifier can be any off-the-shelf verifier (that does not need to understand conditions). Until now, only one reducer was available. But for a systematic investigation of the reducer concept, we need several reducers. To fill this gap, we developed FRed, a Framework for exploring different REDucers. Given an existing reducer, FRed allows us to derive various new reducers, which differ in their trade-off between size and precision of the residual program. For our experiments, we derived seven different reducers. Our evaluation on the largest and most diverse public collection of verification problems shows that we need all seven reducers to solve hard verification tasks that were not solvable before with the considered verifiers.
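As a rough illustration of the reducer-based construction described above, here is a sketch in which a program is modeled as a set of paths and a condition as the set of already-verified paths; conditional_verifier, coarsen, and the stand-in reducer and verifier are hypothetical names, and the FRed-style derivation of a new reducer is only mimicked by over-approximating the residual.

```python
# Reducer-based conditional verification in miniature. Toy model: a
# "program" is a set of paths, a "condition" is the set of paths the
# first verifier already handled. All names are illustrative, not FRed's API.

def conditional_verifier(program, condition, reducer, verifier):
    """Build a conditional verifier from a reducer plus any off-the-shelf
    verifier: reduce to a residual program, then verify the residual.
    The verifier itself never has to understand conditions."""
    residual = reducer(program, condition)
    return verifier(residual)

def coarsen(reducer, widen):
    """Derive a new reducer from an existing one by over-approximating its
    residual, mimicking the size/precision trade-off FRed explores
    ('widen' is a hypothetical over-approximation step)."""
    return lambda program, condition: widen(reducer(program, condition))

# Stand-ins, only to make the sketch executable:
precise_reducer = lambda program, condition: program - condition
verify = lambda residual: "TRUE" if all("bug" not in p for p in residual) else "FALSE"

program = {"p1", "p2", "p3"}
print(conditional_verifier(program, {"p1"}, precise_reducer, verify))  # TRUE

# A derived, less precise reducer re-admits an already-verified path:
cheap_reducer = coarsen(precise_reducer, lambda residual: residual | {"p1"})
print(conditional_verifier(program, {"p1"}, cheap_reducer, verify))    # still TRUE, more work
```

In this toy, the coarsened reducer makes the verifier re-examine behavior that was already verified; in practice, such over-approximation can yield a structurally simpler residual program at the cost of precision, which is the trade-off the abstract refers to.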
“…Various combinations have been proposed for verification [2,10,11,14,25,27,29–31,35,37,40,50,64] and test-suite generation [1,32,34,36,38,47,51,54,56,59,60,63]. We focus on combinations that interleave approaches.…”
Testing is a widely used method to assess software quality. Coverage criteria and coverage measurements are used to ensure that the constructed test suites adequately test the given software. Since manually developing such test suites is too expensive in practice, various automatic test-generation approaches have been proposed. Since all approaches come with different strengths, combinations are necessary to achieve stronger tools. We study cooperative combinations of verification approaches for test generation, with high-level information exchange. We present CoVeriTest, a hybrid approach for test-case generation, which iteratively applies different conditional model checkers. It thereby allows adjusting the level of cooperation and assigning individual time budgets per verifier. In our experiments, we combine explicit-state model checking and predicate abstraction (from CPAchecker) to systematically study different CoVeriTest configurations. Moreover, CoVeriTest achieves higher coverage than state-of-the-art test-generation tools for some programs.
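To make the cooperation scheme concrete, here is a small runnable sketch of a CoVeriTest-style round-robin loop; the callback signatures, the condition format, and the stand-in analyses (explicit, predicate) are assumptions for this toy, not CPAchecker's actual interface, and the "budget" is abused as a goals-per-turn count instead of CPU time.

```python
# Sketch of a CoVeriTest-style round-robin: verifiers alternate within
# individual budgets, and each emits a condition telling the next which
# state space is already handled. All interfaces are illustrative.

def coveritest(verifiers, budgets, goals, rounds):
    goals = set(goals)
    tests, condition = [], None                 # first turn runs without a condition
    for _ in range(rounds):
        for verifier, budget in zip(verifiers, budgets):
            new_tests, covered, condition = verifier(goals, condition, budget)
            tests.extend(new_tests)
            goals -= covered                    # covered goals now have test cases
            if not goals:
                return tests                    # full coverage: stop early
    return tests

# Stand-ins for, e.g., explicit-state model checking and predicate
# abstraction, only to make the loop executable:
def explicit(goals, condition, budget):
    hit = set(sorted(goals)[:budget])
    return [f"test-{g}" for g in hit], hit, ("explored", hit)

def predicate(goals, condition, budget):
    hit = set(sorted(goals)[-budget:])
    return [f"test-{g}" for g in hit], hit, ("explored", hit)

print(coveritest([explicit, predicate], budgets=[1, 2], goals={1, 2, 3, 4, 5}, rounds=3))
```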