2017
DOI: 10.48550/arxiv.1712.01785
Preprint

Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems

Abstract: Due to the increasing usage of machine learning (ML) techniques in security- and safety-critical domains, such as autonomous systems and medical diagnosis, ensuring correct behavior of ML systems, especially for different corner cases, is of growing importance. In this paper, we propose a generic framework for evaluating security and robustness of ML systems using different real-world safety properties. We further design, implement and evaluate VERIVIS, a scalable methodology that can verify a diverse set of sa…
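The verification idea summarized in the abstract, exhaustively checking a model over every value of a transformation's finite parameter space, can be illustrated with a minimal Python sketch. The names `model_predict`, `transform`, and `param_space` are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch: exhaustive invariance checking over a finite
# transformation parameter space (illustrative, not the VeriVis code).
def verify_invariance(model_predict, transform, param_space, image):
    """Return the parameter values whose transformed image changes the prediction."""
    reference = model_predict(image)
    counterexamples = [p for p in param_space
                       if model_predict(transform(image, p)) != reference]
    # An empty list means the invariance property holds for this input.
    return counterexamples
```

Because every value in `param_space` is tried, the check is exhaustive for that input and transformation, in contrast to sampling-based testing.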

Cited by 33 publications (35 citation statements)
References 31 publications (51 reference statements)
“…They report a significant performance drop of an object detection model when evaluated on corrupted data. Pei et al [2017] introduce VeriVis, a framework to evaluate the security and robustness of different object recognition models using real-world image corruptions such as brightness, contrast, rotations, smoothing, blurring, and others.…”
Section: Contributions
Mentioning; confidence: 99%
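Plain-NumPy stand-ins for a few of the corruptions named in this statement (brightness, contrast, smoothing) are sketched below; these are illustrative assumptions about how such transformations might be implemented, not the VeriVis code, and they assume a 2-D grayscale image.

```python
import numpy as np

def brightness(img, delta):
    # Shift pixel intensities by `delta`, clipping to the 8-bit range.
    return np.clip(img.astype(np.float32) + delta, 0, 255).astype(np.uint8)

def contrast(img, factor):
    # Scale intensities around the image mean by `factor`.
    mean = img.mean()
    return np.clip((img.astype(np.float32) - mean) * factor + mean, 0, 255).astype(np.uint8)

def box_blur(img, k=3):
    # Average each pixel over a k x k neighborhood (simple smoothing).
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.clip(out / (k * k), 0, 255).astype(np.uint8)
```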
“…Robustness against simple transformations Approaches targeting adversarial accuracy for simple transformations have used attacks and defenses in the spirit of PGD (either on transformation space [11] or on input space projecting to transformation manifold [20]) and simple random or grid search [11,34]. Recent work [10] has also evaluated some rotation-equivariant networks with different training and attack settings which reduces direct comparability with e.g.…”
Section: Related Work
Mentioning; confidence: 99%
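A grid search over a transformation parameter, in the spirit of the simple search strategies mentioned above, can be sketched as follows; `model_loss` is a hypothetical callable returning the classification loss for an input and label, and `scipy.ndimage.rotate` is used for the rotation itself.

```python
import numpy as np
from scipy.ndimage import rotate

def worst_case_rotation(model_loss, image, label,
                        angles=np.linspace(-30, 30, 61)):
    # Try each angle on a fixed grid and keep the one that maximizes the loss.
    worst_angle, worst_loss = None, -np.inf
    for angle in angles:
        rotated = rotate(image, angle, reshape=False, mode="nearest")
        loss = model_loss(rotated, label)
        if loss > worst_loss:
            worst_angle, worst_loss = angle, loss
    return worst_angle, worst_loss
```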
“…Recently, it was observed in [11,13,34,20,14,2] that worst-case prediction performance drops dramatically for neural network classifiers obtained using standard training, even for rather small transformation sets. In this context, we examine the effectiveness of regularization that explicitly encourages the predictor to be constant for transformed versions of the same image, which we refer to as being invariant on the transformation sets.…”
Section: Introduction
Mentioning; confidence: 99%
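One plausible form of the invariance-encouraging regularization described here is a consistency penalty between predictions on an image and a transformed copy of it; the PyTorch sketch below is an assumption about such a loss, not the formulation used in the cited work, and `transform` and `lam` are placeholder names.

```python
import torch
import torch.nn.functional as F

def invariance_regularized_loss(model, x, y, transform, lam=1.0):
    # Standard cross-entropy plus a KL penalty that pushes the predictions
    # on transform(x) toward the predictions on x.
    logits = model(x)
    logits_t = model(transform(x))
    ce = F.cross_entropy(logits, y)
    consistency = F.kl_div(F.log_softmax(logits_t, dim=1),
                           F.softmax(logits, dim=1),
                           reduction="batchmean")
    return ce + lam * consistency
```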
“…Additionally, as ML algorithms carry uncertainty in their outputs, the benefits of including prediction uncertainty in the overall system specification is an open and under investigation topic [151]. Focusing on invariance specifications, Pei et al [172] decompose safety properties for common real-world image distortions into 12 transformation invariance properties that a ML algorithm should maintain. Based on these specifications, they verify safety properties of the trained model using samples from the target domain.…”
Section: Formal Specification
Mentioning; confidence: 99%
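Checking such invariance properties on samples from the target domain amounts to measuring, per property, how often some parameter value flips the prediction; the sketch below is illustrative, and the property table is a placeholder rather than the 12 properties from the paper.

```python
import numpy as np

def violation_rate(model_predict, images, transform, params):
    # Fraction of images whose prediction changes under some parameter value.
    violations = 0
    for img in images:
        ref = model_predict(img)
        if any(model_predict(transform(img, p)) != ref for p in params):
            violations += 1
    return violations / len(images)

# Placeholder property table: name -> (transformation, parameter grid).
properties = {
    "brightness": (lambda img, d: np.clip(img.astype(np.float32) + d, 0, 255).astype(np.uint8),
                   range(-40, 41, 10)),
    "contrast": (lambda img, c: np.clip(img.astype(np.float32) * c, 0, 255).astype(np.uint8),
                 [0.5, 0.75, 1.25, 1.5]),
}
```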