Proceedings of the 2019 Network and Distributed System Security Symposium (NDSS 2019)
DOI: 10.14722/ndss.2019.23371

REDQUEEN: Fuzzing with Input-to-State Correspondence

Abstract: Automated software testing based on fuzzing has experienced a revival in recent years. Especially feedback-driven fuzzing has become well-known for its ability to efficiently perform randomized testing with limited input corpora. Despite a lot of progress, two common problems are magic numbers and (nested) checksums. Computationally expensive methods such as taint tracking and symbolic execution are typically used to overcome such roadblocks. Unfortunately, such methods often require access to source code, a r…
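To make the "magic number" and checksum roadblocks from the abstract concrete, here is a minimal sketch of a toy parser guarded by both. The format, magic value, and function names are hypothetical illustrations, not taken from the paper:

```python
import os
import struct
import zlib

MAGIC = 0x52455121  # hypothetical 4-byte magic value ("REQ!")

def parse(data: bytes) -> bool:
    """Toy parser guarded by a magic number and a checksum."""
    if len(data) < 12:
        return False
    magic, checksum = struct.unpack_from(">II", data, 0)
    if magic != MAGIC:
        # Roadblock 1: a random 4-byte field matches with probability 2**-32.
        return False
    if checksum != zlib.crc32(data[8:]):
        # Roadblock 2: the checksum must match the payload, so mutating the
        # payload without fixing the checksum field always fails here.
        return False
    return True  # "deep" code that a blind mutational fuzzer almost never reaches

# A purely random fuzzer passes the magic check with probability ~2**-32 per try:
hits = sum(parse(os.urandom(16)) for _ in range(10_000))
```

This is why the abstract notes that expensive techniques like taint tracking or symbolic execution are typically brought in: plain random mutation has essentially no signal to guide it past either check.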

Cited by 197 publications (232 citation statements)
References 24 publications (51 reference statements)
“…Building a fuzzing analysis that does not introduce any false positives is notoriously difficult, and fuzzers that detect memory-corruption bugs are not immune to this problem. For example, Aschermann et al. [12] point out that previous evaluations erroneously report crashing inputs that exhaust the fuzzer's available memory as bugs in the original program under test. Furthermore, sanitizers point out many different sources of bugs, including stack-based overflows, use-after-free, use-after-return, and heap-based overflows.…”
Section: B. AC Detection
confidence: 99%
“…Achieving high code coverage on any program under test is a notoriously difficult task because common program patterns like comparing input to magic values or checksum tests are difficult to bypass using fuzzing alone, although program transformation tricks like splitting each comparison into a series of one byte comparisons [36] or simply removing them from the program [46] can improve coverage. Augmenting fuzzing with advanced techniques like taint analysis [50] or symbolic execution [44], [58] helps overcome these fuzzing roadblocks, and RedQueen [12] showed how advanced tracing hardware can emulate these more heavyweight techniques by providing a fuzzer with enough information to establish correspondence between program inputs and internal program state. Prior work has successfully shown fuzz testing can reproduce known AC vulnerabilities in software, and research continues to produce innovative ways to maximize code coverage.…”
Section: Introduction
confidence: 99%
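The comparison-splitting trick mentioned above (cited as [36]) can be sketched as follows. The function names and the `progress` stand-in for edge coverage are illustrative, not taken from any specific tool:

```python
def compare_monolithic(data: bytes) -> bool:
    # One 4-byte compare: coverage feedback is all-or-nothing, so the
    # fuzzer gets no signal until all four bytes match at once.
    return data[:4] == b"FUZZ"

def compare_split(data: bytes) -> bool:
    # The same check split into single-byte compares: each correct byte
    # takes a new branch, so coverage-guided mutation can lock in one
    # byte at a time instead of guessing all four together.
    if data[0] != ord("F"):
        return False
    if data[1] != ord("U"):
        return False
    if data[2] != ord("Z"):
        return False
    if data[3] != ord("Z"):
        return False
    return True

def progress(data: bytes) -> int:
    # Stand-in for edge coverage: how many byte-compare branches passed.
    n = 0
    for got, want in zip(data[:4], b"FUZZ"):
        if got != want:
            break
        n += 1
    return n
```

With the split form, an input like `b"FUxx"` already exercises two new branches (`progress` returns 2), so a coverage-guided fuzzer keeps it in the corpus and mutates it further toward `b"FUZZ"`; the monolithic form gives that input no credit at all.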
“…Symbolic execution has the potential to solve complex constraints [8,10] and is used in fuzzing [18,19,11,26,4,34,28,37,41]. One example is Driller, which uses symbolic execution only when the co-running AFL cannot progress due to complicated constraints [34].…”
Section: Related Work 7.1 Solving Complicated Constraints
confidence: 99%
“…One example is Driller, which uses symbolic execution only when the co-running AFL cannot progress due to complicated constraints [34]. Steelix [26] and REDQUEEN [4] detect magic-byte checks and infer their input offsets to solve them without taint analysis. T-Fuzz ignores input checks in the original program and leverages symbolic execution to filter false positives and reproduce true bugs [28].…”
Section: Related Work 7.1 Solving Complicated Constraints
confidence: 99%
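The input-to-state correspondence idea summarized above can be sketched roughly: observe both operands of a failing comparison at runtime, locate the input-derived operand verbatim in the input, and patch in the value the program expected. A simplified illustration under that assumption of direct (untransformed) correspondence, not the paper's actual algorithm:

```python
def i2s_patch(inp: bytes, observed: bytes, expected: bytes):
    """If the bytes the program compared against `expected` occur verbatim
    in the input, overwrite them with `expected` (direct correspondence).
    Returns the patched input, or None if no correspondence is found."""
    off = inp.find(observed)
    if off == -1:
        return None  # no direct match (the input may be transformed/encoded)
    return inp[:off] + expected + inp[off + len(observed):]

# Suppose the target executed `if input[4:8] == b"\x12\x34\x56\x78"` and we
# observed the operand pair (b"AAAA", b"\x12\x34\x56\x78") at that compare:
inp = b"hdr:AAAAtail"
patched = i2s_patch(inp, b"AAAA", b"\x12\x34\x56\x78")
```

The appeal, as the citing paper notes, is that this needs only the observed operand values (e.g. from compare-instruction tracing), not taint tracking or a constraint solver.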
“…In the past, fuzz testing ("fuzzing") has proven to be a very successful technique for uncovering novel vulnerabilities in complex applications [11], [17], [38], [40], [45], [53], [54]. Unfortunately, only a limited number of resources on fuzzing hypervisors is available at the moment.…”
Section: Introduction
confidence: 99%