2018 16th ACM/IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE)
DOI: 10.1109/memcod.2018.8556962

Towards Dependability Metrics for Neural Networks

Abstract: Artificial neural networks (NN) are instrumental in realizing highly-automated driving functionality. An overarching challenge is to identify best safety engineering practices for NN and other learning-enabled components. In particular, there is an urgent need for an adequate set of metrics for measuring all-important NN dependability attributes. We address this challenge by proposing a number of NN-specific and efficiently computable metrics for measuring NN dependability attributes including robustness, inter…

Cited by 34 publications (24 citation statements) · References 13 publications
“…Bugs in data affect the quality of the generated model, and can be amplified to yield more serious problems over a period of time [45]. Bug detection in data checks problems such as whether the data is sufficient for training or testing a model (also called completeness of the data [46]), whether the data is representative of future data, whether the data contains substantial noise such as biased labels, whether there is skew between training data and test data [45], and whether there is data poisoning [47] or adversarial information that may affect the model's performance. Bug Detection in Frameworks.…”
Section: Testing Components (mentioning)
confidence: 99%
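The data checks listed in the statement above lend themselves to simple programmatic screens. The following is a minimal sketch, assuming tabular data held in NumPy arrays; the function names, the per-feature Kolmogorov-Smirnov criterion, and the significance threshold are illustrative choices of this sketch, not prescriptions from the cited works.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_skew(train, test, alpha=0.01):
    """Flag features whose train/test marginal distributions differ
    (two-sample Kolmogorov-Smirnov test per feature column)."""
    skewed = []
    for j in range(train.shape[1]):
        stat, p = ks_2samp(train[:, j], test[:, j])
        if p < alpha:
            skewed.append((j, stat))
    return skewed

def label_balance(labels):
    """Report class frequencies; extreme imbalance can hint at biased labels."""
    classes, counts = np.unique(labels, return_counts=True)
    return dict(zip(classes.tolist(), (counts / counts.sum()).tolist()))

# Usage (synthetic data): feature 1 is deliberately shifted in the test set.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 3))
test = rng.normal(size=(500, 3))
test[:, 1] += 1.0
print(feature_skew(train, test))  # flags feature index 1
```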
“…Automatic Assessment of Interpretability. Cheng et al. [46] presented a metric to understand the behaviours of an ML model. The metric measures whether the model has actually learned the object in an object-identification scenario by occluding the surroundings of the objects.…”
Section: Interpretability (mentioning)
confidence: 99%
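To make the occlusion idea concrete, here is a minimal sketch of the underlying mechanism: mask everything except the labeled object region and check whether the prediction survives. The predict_prob interface, the fill value, and the bounding-box convention are assumptions of this sketch, not the exact metric defined by Cheng et al. [46].

```python
import numpy as np

def occlude_surroundings(image, box, fill=0.0):
    """Copy of `image` with everything outside box=(x0, y0, x1, y1)
    set to `fill`; the labeled object region is kept intact."""
    x0, y0, x1, y1 = box
    masked = np.full_like(image, fill)
    masked[y0:y1, x0:x1] = image[y0:y1, x0:x1]
    return masked

def context_dependence(predict_prob, image, box, label):
    """Drop in predicted probability once the context is removed.
    A value near 0 suggests the model relies on the object itself
    rather than on its surroundings."""
    return (predict_prob(image, label)
            - predict_prob(occlude_surroundings(image, box), label))
```

Here predict_prob(image, label) is a hypothetical callable wrapping the model under test and returning the probability assigned to `label`.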
“…Reluplex is an SMT (satisfiability modulo theories) solver [19] that verifies properties of deep NNs, or provides counterexamples against them, by exploiting the simplex method [18] and the piecewise linearity of the ReLU function. A related line of work proposes a set of dependability metrics for NNs, such as scenario coverage, neuron activation patterns, and interpretation precision, organized around the RICC (robustness, interpretability, completeness, and correctness) criteria [13].…”
Section: Related Work and Research Directions for Verification of Ma… (mentioning)
confidence: 99%
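As a rough illustration of one of the metrics named above, the sketch below counts on/off activation patterns over a small set of monitored neurons; the choice of monitored layer and the 2^n normalization are simplifying assumptions of this sketch, not the exact definition from [13].

```python
import numpy as np

def activation_pattern_coverage(activations):
    """activations: (num_inputs, num_neurons) pre-ReLU values.

    Treats each monitored neuron as on/off and returns the fraction
    of all 2**num_neurons patterns exercised by the inputs."""
    patterns = {tuple(row) for row in (activations > 0).astype(int)}
    return len(patterns) / float(2 ** activations.shape[1])

# Example: 3 monitored neurons, 4 test inputs.
acts = np.array([[0.5, -1.0, 2.0],
                 [-0.1, 0.3, 0.2],
                 [1.2, -0.4, 1.1],   # same on/off pattern as the first row
                 [-0.5, -0.2, 0.9]])
print(activation_pattern_coverage(acts))  # 3 distinct patterns / 8 = 0.375
```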
“…Figure 1 provides a simplified Goal Structuring Notation (GSN) [18] diagram to assist in understanding how features provided by nn-dependability-kit contribute to the overall safety goal 1. Our proposed metrics, unless explicitly specified, are extensions of our early work [5]. Starting with the goal of having a neural network function correctly (G1), under the assumptions that no software or hardware fault occurs (A1, A2), the strategy (S1) is to establish correctness within the different phases of the product life cycle.…”
Section: Introduction (mentioning)
confidence: 99%