2021
DOI: 10.48550/arxiv.2110.01388
Preprint

Neural Network Verification in Control

Abstract: Learning-based methods could provide solutions to many of the long-standing challenges in control. However, the neural networks (NNs) commonly used in modern learning approaches present substantial challenges for analyzing the resulting control systems' safety properties. Fortunately, a new body of literature could provide tractable methods for analysis and verification of these high dimensional, highly nonlinear representations. This tutorial first introduces and unifies recent techniques (many of which origi…
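
The techniques the tutorial surveys largely build on bound propagation through the network. As a purely illustrative aside, the sketch below shows interval bound propagation (IBP) through a tiny ReLU network; the weights, biases, and input box are made-up assumptions, not taken from the paper.

```python
import numpy as np

def interval_bound_propagation(weights, biases, x_lower, x_upper):
    """Propagate an axis-aligned input box through a ReLU network.

    Returns element-wise lower/upper bounds on the output that hold for
    every input inside [x_lower, x_upper].
    """
    lower = np.asarray(x_lower, dtype=float)
    upper = np.asarray(x_upper, dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (upper + lower) / 2.0
        radius = (upper - lower) / 2.0
        # Affine layer: the center maps through W, the radius through |W|.
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        lower, upper = new_center - new_radius, new_center + new_radius
        if i < len(weights) - 1:
            # ReLU is monotone, so clamping the bounds is sound.
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

# Illustrative 2-2-1 network (all numbers invented for the example).
W = [np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([[0.7, -1.2]])]
b = [np.array([0.1, -0.2]), np.array([0.05])]
lo, hi = interval_bound_propagation(W, b, x_lower=[-0.1, -0.1], x_upper=[0.1, 0.1])
print(f"output bounds: [{lo[0]:.3f}, {hi[0]:.3f}]")
```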

Cited by 2 publications (2 citation statements) · References 53 publications (104 reference statements)

Citation statements
“…Adversarial Perturbation: An intensifying challenge against deep neural network based systems is adversarial perturbation intended to induce incorrect decisions. Many gradient-based noise-generating methods (Goodfellow, Shlens, and Szegedy 2015; Huang et al. 2017; Everett 2021) have been proposed to cause misclassification and mislead an agent's output action. As an example, with a DRL model playing Atari games, an adversarial attacker (Lin et al. 2017; Yang et al. 2020) could inject timely, barely detectable noise to maximize the prediction loss of a Q-network and cause massively degraded performance.…”
Section: Related Work (mentioning)
confidence: 99%
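
To make the cited attack concrete, here is a minimal FGSM-style sketch in the spirit of Goodfellow, Shlens, and Szegedy (2015) and Huang et al. (2017): the observation is perturbed along the sign of the loss gradient to push the agent away from its greedy action. A linear Q-network is assumed here so the gradient has a closed form; the cited attacks differentiate through a deep Q-network instead, and all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear Q-network Q(s) = W s + b, standing in for a deep Q-network.
n_actions, n_features = 4, 8
W = rng.normal(size=(n_actions, n_features))
b = rng.normal(size=n_actions)

def q_values(state):
    return W @ state + b

def fgsm_state_attack(state, epsilon=0.05):
    """FGSM-style perturbation of the agent's observation.

    Increases the loss L(s) = -Q(s, a*) for the originally greedy action a*,
    nudging the agent away from the action it would otherwise take.
    For this linear Q-network, dL/ds = -W[a*]; a deep network would use autodiff.
    """
    a_star = int(np.argmax(q_values(state)))
    grad_loss_wrt_state = -W[a_star]
    return state + epsilon * np.sign(grad_loss_wrt_state), a_star

state = rng.normal(size=n_features)
adv_state, a_star = fgsm_state_attack(state)
print("clean action:", int(np.argmax(q_values(state))),
      "| attacked action:", int(np.argmax(q_values(adv_state))))
```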
“…Another way of approaching safety verification is by reformulating it as a reachability problem, see e.g., [20], [21], [22], [23], [24], [25], which has been of particular interest for the safety verification of closed-loop dynamical systems [26]. While heuristics for reachability problems over dynamical systems exist, these methods are typically computationally expensive.…”
Section: Introduction (mentioning)
confidence: 99%
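
As a rough illustration of the reachability formulation referenced above, the sketch below propagates an interval over-approximation of the reachable set through simple discrete-time closed-loop dynamics. The double-integrator model and the saturating linear "controller" standing in for a trained NN policy are illustrative assumptions; practical verification tools use tighter set representations precisely because plain interval arithmetic like this is conservative.

```python
import numpy as np

# Closed-loop system x_{k+1} = A x_k + B u_k with u = clip(K x, -1, 1).
# All numbers are illustrative assumptions (double integrator, dt = 0.1).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-1.0, -1.5]])   # stand-in for a trained NN controller

def step_interval(lo, hi):
    """One-step interval over-approximation of the closed-loop reachable set."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    # Bound the control over the state box; clip is monotone, so bounds carry through.
    u_c, u_r = K @ c, np.abs(K) @ r
    u_lo, u_hi = np.clip(u_c - u_r, -1, 1), np.clip(u_c + u_r, -1, 1)
    # Bound the next state with interval arithmetic. Ignoring the x-u correlation
    # makes the box loosen over time (the "wrapping effect").
    nxt_c = A @ c + B @ ((u_lo + u_hi) / 2.0)
    nxt_r = np.abs(A) @ r + np.abs(B) @ ((u_hi - u_lo) / 2.0)
    return nxt_c - nxt_r, nxt_c + nxt_r

lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])   # initial state set
for _ in range(20):
    lo, hi = step_interval(lo, hi)
print(f"position bounds after 20 steps: [{lo[0]:.3f}, {hi[0]:.3f}]")
```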