2010
DOI: 10.1007/978-3-642-14295-6_24

An Abstraction-Refinement Approach to Verification of Artificial Neural Networks

Abstract: A key problem in the adoption of artificial neural networks in safety-related applications is that misbehaviors can hardly be ruled out with traditional analytical or probabilistic techniques. In this paper we focus on specific networks known as Multi-Layer Perceptrons (MLPs), and we propose a solution to verify their safety using abstractions to Boolean combinations of linear arithmetic constraints. We show that our abstractions are consistent, i.e., whenever the abstract MLP is declared to be safe, the …
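For intuition, here is a minimal sketch of how such an over-approximating abstraction can establish safety for a toy single-hidden-layer sigmoid MLP. It propagates an input box through the network with interval arithmetic, a simpler stand-in for the paper's encoding into Boolean combinations of linear arithmetic constraints; the weights, input box, and output range below are made up for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W x + b with interval arithmetic."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def abstract_output_bounds(x_lo, x_hi, W1, b1, W2, b2):
    """Sound over-approximation of a 1-hidden-layer sigmoid MLP on an input box.
    Sigmoid is monotone, so applying it to the endpoints of each hidden
    interval preserves the enclosure."""
    h_lo, h_hi = interval_affine(x_lo, x_hi, W1, b1)
    return interval_affine(sigmoid(h_lo), sigmoid(h_hi), W2, b2)

# Toy weights and a box-shaped safety requirement on the single output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
y_lo, y_hi = abstract_output_bounds(np.array([-1.0, -1.0]), np.array([1.0, 1.0]),
                                    W1, b1, W2, b2)
# Consistency: if the abstract bounds satisfy the requirement, so does the
# concrete MLP for every input in the box.
print(f"abstract output interval: [{y_lo[0]:.3f}, {y_hi[0]:.3f}]")
print("safe:", -5.0 <= y_lo[0] and y_hi[0] <= 5.0)
```

If the abstract check fails, the answer may be a false alarm caused by the coarseness of the over-approximation; the abstraction-refinement loop named in the title then tightens the abstraction and retries.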

Cited by 257 publications (216 citation statements)
References: 13 publications
“…And of course all these issues are laid on top of the standard problem of proving that a given software artifact does in fact correctly implement, say, a reinforcement learning algorithm of the intended type. Some work has been done on verifying neural network applications (Pulina and Tacchella 2010;Taylor 2006;Schumann and Liu 2010) and the notion of partial programs (Andre and Russell 2002;Spears 2006) allows the designer to impose arbitrary structural constraints on behavior, but much remains to be done before it will be possible to have high confidence that a learning agent will learn to satisfy its design criteria in realistic contexts. Validity A verification theorem for an agent design has the form, "If environment satisfies assumptions ϕ then behavior satisfies requirements ψ."…”
Section: Professional Ethics
confidence: 99%
“…In [13] we showed that an abstraction mechanism can be devised to yield consistent overapproximations of concrete networks, i.e., once the abstract MLP is proven to be safe, the same holds true for the concrete one. We now outline the abstraction mechanism whose details can be found in [13].…”
Section: Abstraction
confidence: 76%
“…Abstraction is a key enabler for verification because MLPs are compositions of non-linear and transcendental real-valued functions, and the theories to handle such functions are undecidable [17]. In [13] we showed that an abstraction mechanism can be devised to yield consistent overapproximations of concrete networks, i.e., once the abstract MLP is proven to be safe, the same holds true for the concrete one. We now outline the abstraction mechanism whose details can be found in [13].…”
Section: Abstraction
confidence: 86%
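The consistency argument quoted above can be illustrated numerically. The sketch below is only an assumption-laden stand-in: it encloses the sigmoid within a grid of boxes of a made-up width p, whereas the mechanism of [13] is expressed as linear arithmetic constraints over such enclosures.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def abstract_sigmoid(x, p=0.5):
    """Map x to the interval of sigmoid values over the grid cell of width p
    containing x.  Because sigmoid is monotone, the cell endpoints give a
    sound enclosure of every concrete activation in that cell."""
    a = np.floor(x / p) * p
    return sigmoid(a), sigmoid(a + p)

# Consistency check: every concrete activation lies inside its abstract
# enclosure, so a property proved on the enclosures also holds concretely.
xs = np.linspace(-6.0, 6.0, 1001)
lo, hi = abstract_sigmoid(xs)
assert np.all((lo <= sigmoid(xs)) & (sigmoid(xs) <= hi))
print("max enclosure width:", float(np.max(hi - lo)))  # shrinks as p -> 0
```

Shrinking p tightens the enclosure, which is the refinement step suggested by the abstraction-refinement scheme of the paper.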