2022
DOI: 10.1109/lcsys.2021.3087361
Prediction Error Quantification Through Probabilistic Scaling

Cited by 7 publications
(2 citation statements)
References 20 publications
“…The comparison is performed at the operational level to understand if the new data belong to a probability distribution different from that driving the data collection of the training phase. In case of divergence between training and operation, the system must generate an alarm because the performance of the model may no longer conform to what was measured at the training stage (even in case of successfully passed generalization tests 1 ). The problem represents a very important challenge for the secure application of machine learning.…”
Section: Introduction
confidence: 99%
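The divergence check the citing authors describe — comparing operational data against the training-phase distribution and raising an alarm on mismatch — can be sketched, for a single scalar feature, with a two-sample Kolmogorov–Smirnov statistic. This is an illustrative choice, not the test prescribed by the cited work; the threshold below is likewise a placeholder.

```python
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two samples."""
    def cdf(sorted_s, x):
        return sum(v <= x for v in sorted_s) / len(sorted_s)
    a, b = sorted(sample_a), sorted(sample_b)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in a + b)

def drift_alarm(train_data, operational_data, threshold=0.2):
    """Alarm when operational data diverge from the training distribution.
    The threshold is illustrative; in practice it would come from the
    KS null distribution at a chosen significance level."""
    return ks_statistic(train_data, operational_data) > threshold

random.seed(0)
train   = [random.gauss(0.0, 1.0) for _ in range(500)]  # training phase
in_dist = [random.gauss(0.0, 1.0) for _ in range(500)]  # same distribution
shifted = [random.gauss(2.0, 1.0) for _ in range(500)]  # operational drift

print(drift_alarm(train, in_dist))  # no alarm expected
print(drift_alarm(train, shifted))  # alarm expected
```

A multivariate or model-aware monitor would replace the KS statistic, but the alarm structure — compare, threshold, signal — is the same.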
“…The tests of autonomous safety-critical actuations should include all the conditions in the mentioned color gradations, at least by simulation analysis. Although the literature in the field of OoD already proposes solutions based on labelled 1 data or through anomaly detection, as evidenced by [7], [8], OoD detection according to distributional assumption-free and OoD-agnostic criteria is still an open problem 2 .…”

1 Generalization bounds (see, e.g., [1]) concern the gap between the empirical risk, computed on the data actually available (on which the model is trained), and the theoretical risk, computed on the probability distribution that represents the data; this distribution is unknown in closed form and, in the OoD context, represents the "in-distribution".

Section: Introduction
confidence: 99%
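As an illustration of the empirical-vs-theoretical risk gap that the footnote refers to, a textbook-style Hoeffding-type bound (a sketch, not the bound used in the cited paper) states that for a fixed hypothesis $h$ with loss in $[0,1]$ and $n$ i.i.d. samples, with probability at least $1-\delta$:

```latex
\underbrace{R(h)}_{\text{theoretical risk}}
\;\le\;
\underbrace{\widehat{R}_n(h)}_{\text{empirical risk}}
\;+\;
\sqrt{\frac{\ln(1/\delta)}{2n}}
```

The gap term shrinks as $n$ grows, but the guarantee holds only for data drawn from the same distribution as the training set, which is why out-of-distribution monitoring is needed on top of passed generalization tests.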