2021
DOI: 10.1007/978-3-030-88494-9_3

Into the Unknown: Active Monitoring of Neural Networks

Abstract: Machine-learning techniques achieve excellent performance in modern applications. In particular, neural networks enable training classifiers (often used in safety-critical applications) to complete a variety of tasks without human supervision. Neural-network models have neither the means to identify what they do not know nor to interact with the human user before making a decision. When deployed in the real world, such models work reliably in scenarios they have seen during training. In unfamiliar situations, ho…

Cited by 18 publications (13 citation statements)
References 34 publications (23 reference statements)
“…Due to the pervasiveness of adversarial inputs [20,27,50,55,56,70,87], the machine learning community has put a great deal of effort into measuring and improving the robustness of networks [14,15,22,29,44,49,58,61,62,78,84]. The formal methods community has also been looking into the problem, by devising scalable DNN verification, optimization and monitoring techniques [1,4,5,7-9,12,19,32,45,46,54,57,60,66,79,85]. Our approach uses a DNN verifier as a backend, and its scalability would improve as these verifiers become more scalable.…”
Section: Related Work
confidence: 99%
“…Significant progress has recently been made on formal verification techniques for DNNs [1,5,8,9,19,46,57,66,79]. The basic DNN verification query is to determine, given a DNN N , a precondition P , and a postcondition Q, whether there exists an input x such that P (x) and Q(N (x)) both hold.…”
Section: Introduction
confidence: 99%
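The verification query quoted above admits a compact formal statement. As an illustration, a standard local-robustness instantiation (assumed here for concreteness, not taken from the quoted paper) around a reference input x_0 with label l and perturbation bound epsilon reads:

\exists x \;.\; \underbrace{\lVert x - x_0 \rVert_\infty \le \varepsilon}_{P(x)} \;\wedge\; \underbrace{\operatorname{arg\,max}_i N(x)_i \ne \ell}_{Q(N(x))}

A verifier answering such a query either returns a witness x (an adversarial example) or proves that no such input exists.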
“…Safety-critical systems involving LECs, like self-driving cars, extensively use data-based techniques, for which we do not yet have a theory allowing behavioural predictability. To favor scalability, some research efforts in the last few years have focused on using dynamic verification techniques such as testing [36,43,46,48,50] and runtime verification [6,19,32,5].…”
Section: Introduction
confidence: 99%
“…The ensemble nature of Monte-Carlo approaches implies that they are costly to execute. The complementary work on monitoring neuron activation patterns [1], [2], [10], [11] can derive formal guarantees, as it uses a sound abstraction over the training data set. Both qualitative decisions [1], [2] and quantitative decisions [11] are made possible, and it is also used in assume-guarantee-based reasoning for safety verification of highway autonomous driving systems [10].…”
Section: Introduction
confidence: 99%
“…The complementary work on monitoring neuron activation patterns [1], [2], [10], [11] can derive formal guarantees, as it uses a sound abstraction over the training data set. Both qualitative decisions [1], [2] and quantitative decisions [11] are made possible, and it is also used in assume-guarantee-based reasoning for safety verification of highway autonomous driving systems [10]. Our work complements current results in abstraction-based neuron monitoring and can be integrated into these works with ease; it adds a new dimension of integrating symbolic reasoning inside the monitor construction process, thereby providing robustness guarantees.…”
Section: Introduction
confidence: 99%
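The abstraction-based activation monitoring referred to in these statements can be illustrated with a minimal sketch. The snippet below assumes a simple box (interval) abstraction over a chosen hidden layer's activations, with one box per predicted class; the name BoxMonitor, the layer choice, and the random stand-in data are illustrative assumptions, not the exact construction from the cited works [1], [2], [10], [11].

import numpy as np

class BoxMonitor:
    """Box (interval) abstraction over hidden-layer activations, one box per class.

    At training time, record per-dimension min/max of the monitored layer's
    activations for each predicted class. At run time, an input whose activation
    vector falls outside the box of its predicted class is flagged as unknown.
    """

    def __init__(self, num_classes, num_features):
        self.lo = np.full((num_classes, num_features), np.inf)
        self.hi = np.full((num_classes, num_features), -np.inf)

    def fit(self, activations, predicted_classes):
        # activations: (n, num_features) hidden-layer outputs on training data
        # predicted_classes: (n,) class predicted by the network for each input
        for c in np.unique(predicted_classes):
            acts_c = activations[predicted_classes == c]
            self.lo[c] = np.minimum(self.lo[c], acts_c.min(axis=0))
            self.hi[c] = np.maximum(self.hi[c], acts_c.max(axis=0))

    def accepts(self, activation, predicted_class):
        # True if the activation lies inside the box of the predicted class,
        # i.e. the monitor raises no warning for this input.
        return bool(np.all(activation >= self.lo[predicted_class]) and
                    np.all(activation <= self.hi[predicted_class]))

# Illustrative usage with random stand-in data (no real network involved):
rng = np.random.default_rng(0)
train_acts = rng.normal(size=(1000, 16))
train_preds = rng.integers(0, 3, size=1000)

monitor = BoxMonitor(num_classes=3, num_features=16)
monitor.fit(train_acts, train_preds)

test_act = rng.normal(size=16) * 5.0   # unusually large activations
print(monitor.accepts(test_act, predicted_class=1))  # likely False -> flag as unknown

In the quantitative variants mentioned above, the yes/no membership check could be replaced by, for example, a distance to the box; this sketch only covers the qualitative case.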