2020
DOI: 10.1007/978-3-030-58920-2_13
SafeML: Safety Monitoring of Machine Learning Classifiers Through Statistical Difference Measures

Abstract: Ensuring safety and explainability of machine learning (ML) is a topic of increasing relevance as data-driven applications venture into safety-critical application domains, traditionally committed to high safety standards that are not satisfied with an exclusive testing approach of otherwise inaccessible black-box systems. Especially the interaction between safety and security is a central challenge, as security violations can lead to compromised safety. The contribution of this paper to addressing both safety…
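As a rough illustration of the abstract's idea of monitoring ML inputs through statistical difference measures, the sketch below computes a two-sample Kolmogorov-Smirnov distance (one of the ECDF-based measures SafeML builds on) between training-time and runtime feature values. The synthetic data and the 0.1 alarm threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance: largest gap between the two ECDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 2000)    # feature values seen during training
runtime_feature = rng.normal(0.7, 1.0, 2000)  # shifted values observed in operation

d = ks_distance(train_feature, runtime_feature)
if d > 0.1:  # illustrative alarm threshold, not taken from the paper
    print("possible distribution shift: KS distance =", round(d, 3))
```

In a monitoring loop, the same comparison would be repeated on a sliding window of operational inputs, raising an alarm when the distance to the training distribution grows too large.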

Cited by 29 publications (17 citation statements)
References 16 publications (20 reference statements)
“…To demonstrate the effectiveness of our analysis method in more detail, we present the plots of error vs. D_i for individual data points (figure 4) and plots of predicted versus actual values (figure 5), with different deciles shown in different colors, for the TCO formation and band gap energy datasets from [19] and the bike sharing [15] and game action [29] datasets. It is clear that the data in deciles 1–2, corresponding to small distances D_i, are predicted accurately, with the data points for these deciles shown in red falling on the y = x line in the plots in figure 4.…”
Section: Results
confidence: 99%
“…However, despite some successes, it was found to be only a moderately effective UQ tool for several chemistry datasets [1–12]. Thus, while numerous studies have been performed probing the relationships between predictability and domains of applicability, feature space and pointwise distance [1–8, 14–19], in some cases using complex models, a generally reliable method for predicting the errors of ML models based on distance in feature space is still unavailable.…”
Section: Introduction
confidence: 99%
“…Besides gathering general statistics about the target data distribution, one may want to regularly draw random samples from the target distribution and re-evaluate the model accordingly. Aslansefat, Sorokos, Whiting, et al [241] have recently proposed a formal method for monitoring and quantifying distribution shift, potentially serving to recognize when a model is used outside of its safe application area. To detect and quantify label shift specifically, Lipton, Wang, and Smola [243] have proposed Black Box Shift Estimation (BBSE).…”
Section: E: Model Deployment and Monitoring
confidence: 99%
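The Black Box Shift Estimation (BBSE) method cited in the quote above reduces label-shift estimation to a linear solve: given a confusion matrix C estimated on held-out source data and the predicted label distribution mu_hat on the target data, the importance weights w = q(y)/p(y) satisfy C w = mu_hat. A minimal numpy sketch, with made-up illustrative numbers:

```python
import numpy as np

def bbse_weights(confusion_joint, target_pred_dist):
    """Black Box Shift Estimation (Lipton et al.): solve C w = mu_hat.

    confusion_joint[i, j] : P(y_hat = i, y = j) on held-out source data
    target_pred_dist[i]   : fraction of target samples predicted as class i
    Returns w[j] = q(y = j) / p(y = j), the label-shift importance weights.
    """
    return np.linalg.solve(confusion_joint, target_pred_dist)

# Illustrative numbers: a 90%-accurate binary classifier on a balanced source set.
C = np.array([[0.45, 0.05],
              [0.05, 0.45]])
# Target predictions consistent with the label distribution shifting to (0.8, 0.2).
mu_hat = np.array([0.74, 0.26])

w = bbse_weights(C, mu_hat)
print(w)  # estimated weights q(y)/p(y) per class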
“…Several methods are used for evaluating the safety and robustness of deep learning-based systems in [61]–[64]. We also evaluated the applicability of safety-security monitoring based on statistical difference measures of our systems by SafeML in [62]. Table 2 shows the difference between various distance measures for the dataset used in YOLO.…”
Section: Online Safety Zone Estimation For Moving Equipment
confidence: 99%
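The side-by-side comparison of "various distance measures" mentioned in the quoted Table 2 can be sketched in spirit with plain numpy: the snippet below evaluates several ECDF-based two-sample measures of the kind SafeML compares (Kolmogorov-Smirnov, Kuiper, and a Wasserstein-1 estimate) on one pair of samples. The data here are synthetic stand-ins, not the YOLO dataset from the quote.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at the points in `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def distance_table(a, b):
    """Several ECDF-based two-sample distances, compared side by side."""
    grid = np.sort(np.concatenate([a, b]))
    fa, fb = ecdf(a, grid), ecdf(b, grid)
    return {
        "Kolmogorov-Smirnov": np.abs(fa - fb).max(),
        "Kuiper": (fa - fb).max() + (fb - fa).max(),
        # Wasserstein-1 between equal-size samples: mean gap of sorted values
        "Wasserstein": np.abs(np.sort(a) - np.sort(b)).mean(),
    }

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 1000)  # stands in for training-time data
observed = rng.normal(0.5, 1.2, 1000)  # stands in for operational data

for name, value in distance_table(baseline, observed).items():
    print(f"{name:>20s}: {value:.3f}")
```

Reporting several measures at once, as the quoted table does, is useful because the measures weight different parts of the distribution: KS reacts to the largest local ECDF gap, while Wasserstein accumulates shift across the whole range.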