2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)
DOI: 10.1109/dsn.2019.00026
Deep Validation: Toward Detecting Real-World Corner Cases for Deep Neural Networks

Cited by 39 publications (20 citation statements) · References 34 publications
“…Most relevant, K. Pei et al. [35] apply input space reduction techniques to transform the image and can simulate a wide range of real-world distortions, noises, and deformations. W. Wu et al. [36] present a fault model for deep neural network classifiers that includes several corner cases based on alteration of the input image, with possible causes including brightness, camera alignment, and object movements. Finally, evasion attacks consist of modifying the input to a classifier so that it is misclassified, while keeping the modification as small as possible [37].…”
Section: Related Work (mentioning)
confidence: 99%
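The fault model described above covers corner cases induced by input alterations such as brightness changes. As an illustration only (not the method from [36]), the following sketch shows how a brightness-shift corner case might be simulated on an image, assuming 8-bit grayscale pixel data:

```python
import numpy as np

def simulate_brightness_fault(image, delta):
    """Shift every pixel by `delta` to mimic a brightness corner case,
    clipping to the valid [0, 255] range of 8-bit images."""
    shifted = image.astype(np.int16) + delta
    return np.clip(shifted, 0, 255).astype(np.uint8)

# Darken a synthetic 2x2 grayscale image by 100 intensity levels.
img = np.array([[10, 200], [120, 255]], dtype=np.uint8)
dark = simulate_brightness_fault(img, -100)  # -> [[0, 100], [20, 155]]
```

Analogous transformations (shifts, rotations, blur) can be used to mimic camera misalignment or object movement when probing a classifier for misbehavior.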
“…Several works explore how to make trained agents robust to artificially crafted or accidentally manipulated input images [35], [36], [37], and how to secure a camera from direct attacks that may disrupt its proper behavior [18], [19]. In fact, even slight alterations of the images may alter the output of the trained agent [39].…”
Section: Introduction (mentioning)
confidence: 99%
“…The types of test cases that these studies addressed were adversarial test cases on 14 occasions (e.g. [48,53,70]) and corner test cases on 3 occasions [22,213,219]. As opposed to the related categories about test case generation and selection, papers in this general category were very focused on conceptualizing different types of test cases.…”
Section: Software Testing (115 Studies) (mentioning)
confidence: 99%
“…The discrepancy between the assumed input data in the development phase and the real input data observed in the operation phase likely causes undesirable outputs at runtime. Wu et al. leveraged the data validation technique to detect real-world corner cases for DNN-based systems [24]. Considering cyber-physical systems that employ ML components, Dreossi et al. proposed a method to analyze the input space of ML classifiers for inputs that can lead to undesirable consequences in the cyber-physical system [25].…”
Section: Related Work (mentioning)
confidence: 99%
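The data-validation idea cited above flags operation-phase inputs that fall outside the distribution assumed during development. A minimal sketch of that idea, using a hypothetical per-image mean-intensity statistic and a 3-sigma threshold (both assumptions, not the checks from [24]):

```python
import numpy as np

def fit_reference_stats(train_images):
    """Record the distribution of per-image mean intensities
    observed over the development-phase training set."""
    means = np.array([img.mean() for img in train_images])
    return {"mu": means.mean(), "sigma": means.std() + 1e-8}

def is_corner_case(image, stats, k=3.0):
    """Flag a runtime input whose mean intensity deviates more than
    k sigma from the training-time reference (a crude validity check)."""
    return abs(image.mean() - stats["mu"]) > k * stats["sigma"]

rng = np.random.default_rng(0)
train = [rng.integers(100, 156, size=(8, 8)) for _ in range(50)]
stats = fit_reference_stats(train)

dark_input = np.zeros((8, 8))        # far outside the training range
normal_input = np.full((8, 8), 128)  # typical brightness
```

Flagged inputs would then be handled by a fallback path (e.g., rejecting the prediction) rather than trusted blindly at runtime.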