2020
DOI: 10.48550/arxiv.2010.01785
Preprint

UNIFUZZ: A Holistic and Pragmatic Metrics-Driven Platform for Evaluating Fuzzers

Abstract: A flurry of fuzzing tools (fuzzers) have been proposed in the literature, aiming at detecting software vulnerabilities effectively and efficiently. To date, it is however still challenging to compare fuzzers due to the inconsistency of the benchmarks, performance metrics, and/or environments for evaluation, which buries the useful insights and thus impedes the discovery of promising fuzzing primitives. In this paper, we design and develop UNIFUZZ, an open-source and metrics-driven platform for assessing fuzzer…

Cited by 2 publications (3 citation statements)
References 30 publications
Citation statements:
“…This period of stunted progress had a deluge of academic papers which claimed superiority over AFL [46], [65], [69], [72], [83], [110], [121], with the last algorithmic update in 2017 [201]. Despite the appearance of progress, several recent papers have independently evaluated AFL with many of its derivatives and concluded that "superiority" is marginal [170], [182]. Many of the claimed victories were limited to specific conditions of the evaluation methodology, but failed to generalize when other researchers completed independent testing.…”
Section: Introduction (mentioning); confidence: 99%
“…In the discipline of machine learning, Dehghani et al [166] examines the critical importance of canonical benchmarks and how the bias of benchmarks shapes the collective decisions about which algorithms are better while perhaps losing perspective on the implications of the assumed bias in the benchmarks. Other studies examine the biases within existing benchmarks or propose new benchmarks [141], [170], [183].…”
Section: Introduction (mentioning); confidence: 99%