Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis 2019
DOI: 10.1145/3293882.3330579
DeepHunter: a coverage-guided fuzz testing framework for deep neural networks

Cited by 313 publications (216 citation statements); references 23 publications.
“…The results of accuracy in above two sections are based on the original testing data. To further investigate the quality of migrated/quantized models, we combine the existing tools TENSORFUZZ [46] and DEEPHUNTER [67] as data generator. We generate a large-scale testing data by using MNIST and CIFAR-10 as inputs to capture the differential behaviors between the PC model and the migrated/quantized model.…”
Section: Migration and Quantization on Generated Data
confidence: 99%
“…DeepXplore [50] and DeepGauge [42] proposed the new testing criteria for deep learning testing. DeepTest [64], DeepHunter [67] and TensorFuzz [46] proposed coverage-guided testing techniques, which mainly focus on feedforward neural networks. DeepStellar [26] is proposed to perform the quantitative analysis for recurrent neural networks (RNN).…”
Section: Deep Learning Testing
confidence: 99%
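The coverage-guided loop shared by the tools named above can be sketched roughly as follows. This is a hypothetical minimal version, not DeepHunter's actual implementation: `model`, `mutate`, and the simple threshold-based coverage criterion are all placeholders standing in for the real components.

```python
import numpy as np

class NeuronCoverage:
    """Toy neuron-coverage criterion: a neuron counts as covered once its
    activation exceeds a fixed threshold on any input seen so far."""
    def __init__(self, n_neurons, threshold=0.5):
        self.covered = np.zeros(n_neurons, dtype=bool)
        self.threshold = threshold

    def update(self, activations):
        """Record activations; return True if any new neuron was covered."""
        newly = (activations > self.threshold) & ~self.covered
        self.covered |= newly
        return bool(newly.any())

    def rate(self):
        return float(self.covered.mean())

def fuzz(model, seeds, coverage, mutate, n_iter=500, rng=None):
    """Minimal coverage-guided loop: mutate queued inputs and keep only
    mutants whose activations increase the coverage criterion."""
    rng = rng or np.random.default_rng(0)
    queue = list(seeds)
    for s in queue:
        coverage.update(model(s))            # coverage of the initial seeds
    for _ in range(n_iter):
        seed = queue[rng.integers(len(queue))]
        mutant = mutate(seed, rng)
        if coverage.update(model(mutant)):   # feedback: coverage grew, keep it
            queue.append(mutant)
    return queue
```

The key design point is the feedback edge: a mutant is retained (and later mutated again) only if it exercises previously unseen behavior, so the queue steadily drifts toward inputs that reach uncovered neurons.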
“…They often exploit them to drive test input generation. Since classical adequacy criteria based on the code's control flow graph are ineffective with NNs, as typically 100% control flow coverage of the code of an NN can be easily reached with few inputs, researchers have defined novel test adequacy criteria specifically targeted to neural networks (Kim et al. 2019; Ma et al. 2018b, 2019; Sekhon and Fleming 2019; Sun et al. 2018a, b; Pei et al. 2017; Shen et al. 2018; Guo et al. 2018; Xie et al. 2019).…”
Section: Addressed Problem (RQ 1.1)
confidence: 99%
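One family of such neuron-level adequacy criteria partitions each neuron's activation range, as observed on the training data, into k buckets and measures how many (neuron, bucket) pairs the test inputs hit, in the spirit of DeepGauge's k-multisection neuron coverage (Ma et al. 2018b). The sketch below is an illustrative simplification, not the paper's exact definition; the function name and array shapes are assumptions.

```python
import numpy as np

def kmulti_coverage(train_acts, test_acts, k=10):
    """Toy k-multisection neuron coverage: fraction of (neuron, bucket)
    pairs hit by test activations, where the k buckets evenly partition
    each neuron's [min, max] activation range seen on training data.
    Both inputs have shape (n_inputs, n_neurons)."""
    lo = train_acts.min(axis=0)
    hi = train_acts.max(axis=0)
    width = np.where(hi > lo, hi - lo, 1.0)       # avoid division by zero
    n_neurons = train_acts.shape[1]
    hit = np.zeros((n_neurons, k), dtype=bool)
    idx = np.floor((test_acts - lo) / width * k).astype(int)
    in_range = (idx >= 0) & (idx < k)             # out-of-range values ignored
    for n in range(n_neurons):
        hit[n, idx[in_range[:, n], n]] = True
    return float(hit.mean())
```

Unlike control-flow coverage, this metric keeps rising only as test inputs drive activations into previously unvisited regions of each neuron's range, which is why a handful of inputs cannot trivially saturate it.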
“…Five works (7%) manipulate only the input data, i.e., they perform input level testing (Bolte et al. 2019; Byun et al. 2019; Henriksson et al. 2019; Wolschke et al. 2018). The majority of the papers (64%) operate at the ML model level (model level testing) (Cheng et al. 2018a; Ding et al. 2017; Du et al. 2019; Dwarakanath et al. 2018; Eniser et al. 2019; Gopinath et al. 2018; Groce et al. 2014; Guo et al. 2018; Kim et al. 2019; Li et al. 2018; Ma et al. 2018b, c, d, 2019; Murphy et al. 2007a, b, 2008a, b, 2009; Nakajima and Bui 2016, 2019; Odena et al. 2019; Patel et al. 2018; Pei et al. 2017; Qin et al. 2018; Saha and Kanewala 2019; Sekhon and Fleming 2019; Shen et al. 2018; Shi et al. 2019; Spieker and Gotlieb 2019; Strickland et al. 2018; Sun et al. 2018a, b; Tian et al. 2018; Udeshi and Chattopadhyay 2019; Udeshi et al. 2018; Uesato et al. 2019; Xie et al. 2011, 2018, 2019; Zhang et al. 2018a, b, 2019; Zhao a…”
Section: Cost Of Testing
confidence: 99%