2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE)
DOI: 10.1109/ase.2017.8115669
Automatic testing of symbolic execution engines via program generation and differential testing

Abstract: Symbolic execution has attracted significant attention in recent years, with applications in software testing, security, networking and more. Symbolic execution tools, like CREST, KLEE, FuzzBALL, and Symbolic PathFinder, have enabled researchers and practitioners to experiment with new ideas, scale the technique to larger applications and apply it to new application domains. Therefore, the correctness of these tools is of critical importance. In this paper, we present our experience extending compiler …

Cited by 33 publications (20 citation statements)
References 33 publications
“…The idea of randomly generating or mutating programs to induce errors in production compilers and interpreters has a long history, with grammar- or mutation-based fuzzers having been designed to test implementations of languages such as COBOL [Sauder 1962], PL/I [Hanford 1970], FORTRAN [Burgess and Saidi 1996], Ada and Pascal [Wichmann 1998], and more recently C [Le et al. 2014, 2015a; Nagai et al. 2014; Nakamura and Ishiura 2016; Sun et al. 2016a; Yang et al. 2011; Yarpgen 2018], JavaScript and PHP [Holler et al. 2012], Java byte-code [Chen et al. 2016], OpenCL [Lidbury et al. 2015], GLSL [Donaldson et al. 2017; Donaldson and Lascu 2016] and C++ [Sun et al. 2016b] (see also two surveys on the topic [Boujarwah and Saleh 1997; Kossatchev and Posypkin 2005]). Related approaches have been used to test other programming language processors, such as static analysers, refactoring engines [Daniel et al. 2007], and symbolic executors [Kapus and Cadar 2017]. Many of these approaches are either geared towards inducing crashes, for which the test oracle problem is easy…”
Section: Related Work
confidence: 99%
“…Testing symbolic execution engines. Kapus and Cadar use random program generation in combination with differential testing to find bugs in symbolic execution engines [9], for instance by comparing crashes, output differences, and code coverage. Unlike our approach, this work specifically targets symbolic execution engines and compares the tested engines on randomly generated programs.…”
Section: Related Work
confidence: 99%
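The approach summarized in the statement above hinges on a simple comparison step: run the same deterministic, randomly generated program both natively and under the symbolic executor, then flag any divergence as a candidate engine bug. The sketch below illustrates only that comparison logic; the run dictionaries and the 5% coverage threshold are illustrative assumptions, not details taken from the paper's actual harness.

```python
# Hedged sketch of the differential-testing comparison step used to flag
# symbolic-executor bugs. Field names and the coverage threshold are
# placeholders, not the paper's real harness interface.

def compare_runs(native, symbolic):
    """Return discrepancy labels between a native run and a symbolic-executor
    run of the same deterministic, randomly generated program."""
    issues = []
    if native["exit"] != symbolic["exit"]:
        issues.append("exit-code mismatch")
    if native["stdout"] != symbolic["stdout"]:
        issues.append("output mismatch")
    if symbolic["crashed"]:
        issues.append("engine crash")
    # A large coverage gap relative to the native (instrumented) run suggests
    # the engine mis-explored paths; 0.05 is an arbitrary illustrative bound.
    if abs(native["coverage"] - symbolic["coverage"]) > 0.05:
        issues.append("coverage divergence")
    return issues

# Example: an engine that computes the wrong checksum for a generated program.
native_run   = {"exit": 0, "stdout": "checksum = 5A\n", "crashed": False, "coverage": 0.93}
symbolic_run = {"exit": 0, "stdout": "checksum = 3F\n", "crashed": False, "coverage": 0.93}
print(compare_runs(native_run, symbolic_run))  # → ['output mismatch']
```

In the paper's actual setup the native oracle comes from compiling and executing the generated program directly, while the engine under test runs the same program in concrete or symbolic mode; the comparison step itself stays this simple.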
“…General evaluation of program analysis techniques. While related work mainly evaluates existing program analysis tools by focusing on the final goal for which these tools are used, the research community has started to invest effort into ensuring the precision and reliability of program analysis tools, regardless of the problem they are trying to solve [11], [28].…”
Section: Related Work
confidence: 99%
“…As an example, Kapus et al. [28] adopted compiler testing techniques to automatically find errors in symbolic execution engines. They managed to find 20 major bugs in three widely-used symbolic execution engines.…”
Section: Related Work
confidence: 99%