2019
DOI: 10.48550/arxiv.1903.06758
Preprint

Algorithms for Verifying Deep Neural Networks

Cited by 48 publications (58 citation statements)
References 33 publications
“…Such works are complementary to ours in the sense that they provide a convergence analysis of an existing algorithm for deep learning. In a different line of work, Liu et al. (2019a) propose to exploit interpolation to prove convergence of a new acceleration method for deep learning. However, their experiments suggest that the method still requires the use of a hand-designed learning-rate schedule.…”
Section: Related Work
confidence: 99%
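
As context for the excerpt above, a "hand-designed learning-rate schedule" usually means a fixed decay rule chosen ahead of time by the practitioner. Below is a minimal Python sketch of one common variant, step decay; the base rate, decay factor, and milestone epochs are illustrative values, not taken from either cited work.

# Hypothetical step-decay schedule: multiply the base learning rate by
# `gamma` each time training passes a milestone epoch.
def step_decay_lr(epoch, base_lr=0.1, gamma=0.1, milestones=(30, 60, 90)):
    drops = sum(epoch >= m for m in milestones)  # milestones already passed
    return base_lr * gamma ** drops

# 0.1 for epochs 0-29, 0.01 for 30-59, 0.001 for 60-89, 0.0001 afterwards.
for epoch in (0, 30, 60, 90):
    print(epoch, step_decay_lr(epoch))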
“…This makes it difficult to quantify how the output varies given controlled, semantically-aligned variations in the input from our unit tests. While there are works that aim to bring interpretable formal verification to DL models [9,10], the scale is still far from the millions, if not billions, of parameters used in contemporary models [11]-[13].…”
Section: Introduction
confidence: 99%
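
The verification techniques alluded to in this excerpt (and surveyed in the paper above) typically compute guaranteed output bounds under bounded input perturbations. The following is a minimal sketch of one such method, interval bound propagation; the network weights and perturbation radius are invented for illustration, and this is not the specific algorithm of any cited reference.

import numpy as np

def affine_bounds(W, b, lower, upper):
    # Propagate an interval [lower, upper] through x -> W @ x + b.
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # |W| maps the radius exactly
    return new_center - new_radius, new_center + new_radius

def relu_bounds(lower, upper):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy two-layer ReLU network with hand-picked (hypothetical) weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

# Certify the output range over an L-infinity ball of radius 0.1
# around the input point (1.0, 0.5).
x, eps = np.array([1.0, 0.5]), 0.1
lo, hi = x - eps, x + eps
lo, hi = relu_bounds(*affine_bounds(W1, b1, lo, hi))
lo, hi = affine_bounds(W2, b2, lo, hi)
print("certified output range:", (float(lo[0]), float(hi[0])))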
“…Another approach to addressing safety for RL policies is to use a safety layer that filters out unsafe actions, as proposed in [1], [13], [14]. This safety layer must be easily verifiable, with no black boxes, such as deep neural networks, inside its architecture, since these are hard to verify [15]. In [1], the authors used safety constraints to prevent unsafe lane changes by an RL agent.…”
Section: Introduction
confidence: 99%
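
The safety-layer pattern described in this excerpt is simple enough to sketch: the layer sits between the RL policy and the environment and vetoes any proposed action that violates a rule-based, hence verifiable, safety predicate. The sketch below is a hypothetical illustration of that pattern, not the architecture of [1], [13], or [14]; the lane-change rule and gap threshold are invented.

# Hypothetical rule-based safety layer for a discrete action space.
# Actions: 0 = keep lane, 1 = change left, 2 = change right.
MIN_GAP = 10.0  # invented rule: need at least 10 m of free gap to change lanes

def is_safe(action, gap_left, gap_right):
    if action == 1:
        return gap_left >= MIN_GAP
    if action == 2:
        return gap_right >= MIN_GAP
    return True  # keeping the lane is always allowed in this toy model

def safety_layer(proposed, gap_left, gap_right, fallback=0):
    # Pass the policy's action through if safe; otherwise substitute a
    # verifiably safe fallback (here the 'keep lane' action).
    if is_safe(proposed, gap_left, gap_right):
        return proposed
    return fallback

# The policy proposes a left lane change, but the measured gap is too
# small, so the layer overrides it with 'keep lane'.
print(safety_layer(proposed=1, gap_left=3.0, gap_right=12.0))  # -> 0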