2020
DOI: 10.1609/aaai.v34i04.6071
Justification-Based Reliability in Machine Learning

Abstract: With the advent of Deep Learning, the field of machine learning (ML) has surpassed human-level performance on diverse classification tasks. At the same time, there is a stark need to characterize and quantify reliability of a model's prediction on individual samples. This is especially true in applications of such models in safety-critical domains of industrial control and healthcare. To address this need, we link the question of reliability of a model's individual prediction to the epistemic uncertainty of th…

Cited by 7 publications (12 citation statements). References 7 publications (7 reference statements).
“…We present outcomes for the 4 models using a Class Confusion Matrix (CM) for the 8-class problem, that is typically used to indicate prediction performance of a classifier. For the epistemic models, (Virani et al, 2020) introduced an Augmented Confusion Matrix, a variant of CM, which splits it into 3 submatrices, each of which is a confusion matrix, but separately deals with predictions that have been assigned different epistemic statuses. In other words, the I-Know predictions are grouped within a dedicated confusion matrix showing performance of the classifier for the cases where the model's epistemic uncertainty is minimal and the predictions are highly confident.…”
Section: Results
confidence: 99%
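The Augmented Confusion Matrix described above can be sketched in a few lines: instead of one confusion matrix over all predictions, predictions are first grouped by their assigned epistemic status, and a separate confusion matrix is built per group. The "I-Know" label comes from the cited text; the other status strings and the grouping logic here are assumptions for illustration, not the authors' implementation.

```python
# Sketch of an "augmented" confusion matrix: one standard confusion
# matrix per epistemic status, so high-confidence ("I-Know") predictions
# can be evaluated separately from uncertain ones.
from collections import defaultdict

import numpy as np


def augmented_confusion_matrix(y_true, y_pred, statuses, n_classes):
    """Return a dict mapping each epistemic status to its confusion matrix.

    y_true, y_pred : sequences of integer class labels
    statuses       : per-sample epistemic status strings
                     (e.g. "I-Know", "I-Don't-Know" -- names assumed here)
    """
    matrices = defaultdict(lambda: np.zeros((n_classes, n_classes), dtype=int))
    for t, p, s in zip(y_true, y_pred, statuses):
        matrices[s][t, p] += 1  # rows: true class, columns: predicted class
    return dict(matrices)


# Toy example for an 8-class problem
y_true = [0, 1, 1, 7]
y_pred = [0, 1, 2, 7]
statuses = ["I-Know", "I-Know", "I-Don't-Know", "I-Know"]
acm = augmented_confusion_matrix(y_true, y_pred, statuses, n_classes=8)
```

Here the `"I-Know"` submatrix isolates the classifier's performance on exactly the predictions for which epistemic uncertainty was minimal, which is the point of the split described in the citation above.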
“…Epistemic classification is an approach, within the Humble AI initiative, inspired from the theory of Justified True Belief (JTB) in Epistemology ( (Steup, 2007) is a good exposition), which aimed to study the limits and validity of human-acquired knowledge. We extend the same concept to understanding and characterizing the validity and limits of knowledge as acquired by supervised classifiers, as detailed in (Virani et al, 2020). We showed that the JTB analysis can be leveraged to expose the uncertainty of a classification model with respect to its inference due to ambiguity or extrapolation, thereby allowing for the inference to be only as strong as the justification permits.…”
Section: Epistemic Classification
confidence: 95%
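The JTB framing above — an inference "only as strong as the justification permits" — can be illustrated with a minimal sketch: the model's prediction is the *belief*, and agreement among nearby training samples serves as the *justification*. The status names other than "I-Know", the nearest-neighbor justification, and the thresholds are all assumptions made here for illustration; the cited paper's actual mechanism may differ.

```python
# Minimal sketch of JTB-style epistemic status assignment: a prediction
# (belief) earns "I-Know" only when justified by supporting evidence,
# modeled here as label agreement among nearby training samples.
import numpy as np


def epistemic_status(pred_label, neighbor_labels, agreement_threshold=0.8):
    """Assign an epistemic status to a single prediction.

    pred_label      : class predicted by the model (the "belief")
    neighbor_labels : labels of nearby training samples (the "justification")
    """
    if len(neighbor_labels) == 0:
        return "I-Don't-Know"   # extrapolation: no nearby evidence at all
    agreement = np.mean(np.asarray(neighbor_labels) == pred_label)
    if agreement >= agreement_threshold:
        return "I-Know"         # belief is well justified by the evidence
    if agreement > 0:
        return "I-May-Know"     # ambiguous, partially supporting evidence
    return "I-Don't-Know"       # evidence contradicts the belief
```

This captures the two failure modes named in the citation — ambiguity (mixed neighbor labels) and extrapolation (no neighbors) — each of which weakens the justification and therefore the epistemic status of the inference.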