2022
Preprint
DOI: 10.48550/arxiv.2201.11676
Monitoring Model Deterioration with Explainable Uncertainty Estimation via Non-parametric Bootstrap

Abstract: Monitoring machine learning models once they are deployed is challenging. It is even more challenging to decide when to retrain models in real-case scenarios when labeled data is beyond reach, and monitoring performance metrics becomes unfeasible. In this work, we use non-parametric bootstrapped uncertainty estimates and SHAP values to provide explainable uncertainty estimation as a technique that aims to monitor the deterioration of machine learning models in deployment environments, as well as determine the s…
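The abstract describes a label-free monitoring pipeline: estimate predictive uncertainty from a non-parametric bootstrap ensemble, then attribute that uncertainty to input features with SHAP. Below is a minimal sketch of that idea in Python, not the authors' exact method: the synthetic dataset, ensemble size, tree models, and the surrogate regressor fitted on the uncertainty signal are all illustrative assumptions; only scikit-learn and the shap package's TreeExplainer are real APIs used as documented.

# Minimal sketch (assumptions: synthetic data, tree models, surrogate on
# uncertainty). Not the paper's reference implementation.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_train, y_train = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_new, _ = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=1)

# 1) Fit an ensemble on non-parametric bootstrap resamples of the labeled
#    training data (sampling rows with replacement).
ensemble = []
for _ in range(50):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    model = DecisionTreeRegressor(max_depth=6).fit(X_train[idx], y_train[idx])
    ensemble.append(model)

# 2) On new, unlabeled deployment data, the spread of the bootstrap
#    predictions is a label-free uncertainty estimate; a rising trend in
#    this signal can flag model deterioration without ground truth.
preds = np.stack([m.predict(X_new) for m in ensemble])  # (n_models, n_samples)
uncertainty = preds.std(axis=0)

# 3) Fit a surrogate model mapping inputs to the uncertainty itself, then
#    use SHAP to explain which features drive the uncertainty.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_new, uncertainty)
shap_values = shap.TreeExplainer(surrogate).shap_values(X_new)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

Features with large mean absolute SHAP values on the surrogate point to where in the input space the model has become uncertain, which is the "explainable uncertainty estimation" the abstract refers to.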

Cited by 1 publication (1 citation statement)
References 14 publications (16 reference statements)
“…A complementary interesting research angle would be explanations of model uncertainty-for instance to debug models or clean data. For the sake of clarity, we did not include this research and refer the interested reader to, e.g., [4][5][6][7].…”
Section: Introduction (mentioning)
confidence: 99%