2021
DOI: 10.4018/ijaiml.289536

An Integrated Process for Verifying Deep Learning Classifiers Using Dataset Dissimilarity Measures

Abstract: The specification and verification of algorithms is vital for safety-critical autonomous systems which incorporate deep learning elements. We propose an integrated process for verifying artificial neural network (ANN) classifiers. This process consists of an off-line verification and an on-line performance prediction phase. The process is intended to verify ANN classifier generalisation performance, and to this end makes use of dataset dissimilarity measures. We introduce a novel measure for quantifying the di…

Cited by 4 publications (8 citation statements)
References 22 publications (32 reference statements)
“…The UQ method also returns an estimate of the uncertainty interval associated with a performance prediction. The proposed UQ method is an extension of the technique discussed in [7,47], where the authors introduce the idea of studying the relationship between ANN classifier performance and data dissimilarity in the context of ML verification. The degree of shift is to be gauged by data dissimilarity measures.…”
Section: A UQ Methods for Model Performance Prediction Based on Data ...
confidence: 99%
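The statement above describes predicting classifier performance from a dataset dissimilarity measure, with an uncertainty interval around the prediction. A minimal sketch of that idea follows; the choice of measure (squared MMD with an RBF kernel), the linear fit, and every function name here are illustrative assumptions, not the method of [7,47]:

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel, used here
    as a stand-in dataset dissimilarity measure between a reference
    (training) sample X and an operational sample Y."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def fit_performance_model(dissims, accuracies):
    """Least-squares line through (dissimilarity, accuracy) pairs
    measured on held-out datasets with varying degrees of shift."""
    A = np.vstack([dissims, np.ones_like(dissims)]).T
    coef, *_ = np.linalg.lstsq(A, accuracies, rcond=None)
    sigma = (accuracies - A @ coef).std(ddof=2)  # residual spread
    return coef, sigma

def predict_with_interval(coef, sigma, dissim, z=1.96):
    """Performance prediction plus a crude ~95% band for a new
    operational batch with the given dissimilarity score."""
    mean = coef[0] * dissim + coef[1]
    return mean, (mean - z * sigma, mean + z * sigma)
```

In this reading, the off-line phase fits the performance model on shifted held-out sets, and the on-line phase calls `predict_with_interval` on each incoming batch's dissimilarity score.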
“…The accuracy of ML models tends to fall when used on data that are statistically different from their training data [4,7]. The term in-distribution is used to describe data which are drawn from the training data-generating distribution (i.e., the probability distribution from which training samples are drawn); out-of-distribution data are not drawn from the training data-generating distribution.…”
Section: Introduction
confidence: 99%
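As a purely illustrative aside (synthetic data, not from either paper): the in-distribution versus out-of-distribution distinction, and the accuracy drop it causes, can be reproduced with a toy classifier whose decision rule is fixed on the training distribution and then evaluated under covariate shift:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two-class 1-D data; shift=0 draws from the training
    data-generating distribution (in-distribution), shift != 0
    simulates out-of-distribution covariate shift."""
    x0 = rng.normal(-1.0 + shift, 1.0, n)  # class 0
    x1 = rng.normal(+1.0 + shift, 1.0, n)  # class 1
    X = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def accuracy(X, y, threshold=0.0):
    # Optimal rule for the *training* distribution: predict class 1
    # when x > 0. It is fixed, so it degrades when the data shift.
    return ((X > threshold).astype(float) == y).mean()

X_id, y_id = sample(10_000)               # in-distribution test data
X_ood, y_ood = sample(10_000, shift=1.5)  # shifted (OOD) test data

print(f"ID accuracy:  {accuracy(X_id, y_id):.3f}")    # ~0.84
print(f"OOD accuracy: {accuracy(X_ood, y_ood):.3f}")  # ~0.65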
“…We trained our detectors with RMSprop (learning rate 5 × 10⁻⁴, weight decay 10⁻⁵), for 30 epochs (ImageNet) or 200 epochs (all other ID datasets). As a comparison, we also implemented a class-based FNRD [9], extracting neuron activation values at different layers of the classifier.…”
Section: Methods
confidence: 99%
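For orientation, the quoted training setup maps onto a standard PyTorch optimiser configuration roughly as follows. Only the RMSprop hyperparameters and epoch counts come from the statement above; the detector architecture and data loader are placeholders, not the authors' code:

```python
import torch
from torch import nn

LR, WEIGHT_DECAY = 5e-4, 1e-5
EPOCHS = 200  # 30 for ImageNet in the cited setup

# Placeholder detector: maps extracted classifier activations
# (assumed 512-dimensional here) to an ID/OOD decision.
detector = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.RMSprop(
    detector.parameters(), lr=LR, weight_decay=WEIGHT_DECAY
)
loss_fn = nn.CrossEntropyLoss()

def train(loader):
    """One full training run over a loader yielding
    (activation_features, labels) batches."""
    detector.train()
    for _ in range(EPOCHS):
        for features, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(detector(features), labels)
            loss.backward()
            optimizer.step()
```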
“…[6] proposes two related methods: MaxLogit, based on the maximum logit value, and KL-Matching, which measures the KL divergence between the output of the model and the class-conditional mean softmax values. The Fractional Neuron Region Distance (FNRD) [9] computes the range of activations for each neuron over the training set in order to empirically characterise the statistical properties of these activations, then provides a score describing how many neuron outputs are outside the corresponding range boundaries for a given input. Similarly, for each layer in the model, [18] computes the range of pairwise feature correlation between channels across the training set.…”
Section: Related Work
confidence: 99%
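The range-based scoring idea attributed to FNRD [9] above can be sketched in a few lines. This is a simplified single-layer reading of that description, not the reference implementation; the actual method aggregates over multiple layers and its exact normalisation may differ:

```python
import numpy as np

def neuron_ranges(train_activations):
    """Per-neuron [min, max] activation bounds over the training set.
    `train_activations` is an (n_samples, n_neurons) array of values
    collected from the monitored layer."""
    return train_activations.min(axis=0), train_activations.max(axis=0)

def fnrd_score(x_activations, lo, hi):
    """Fraction of neurons whose activation for a single input falls
    outside the training-time bounds; a higher score suggests the
    input is unlike the training data."""
    outside = (x_activations < lo) | (x_activations > hi)
    return outside.mean()

# Usage sketch: bounds are fixed once from training-set activations,
# then each test input is scored against them.
lo, hi = neuron_ranges(np.random.rand(1000, 256))   # training pass
score = fnrd_score(np.random.rand(256) * 2.0, lo, hi)  # test input
```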