2019
DOI: 10.3390/electronics8080832

Machine Learning Interpretability: A Survey on Methods and Metrics

Abstract: Machine learning systems are becoming increasingly ubiquitous. These systems' adoption has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden to the user and even experts cannot fully understand the rationale behind their predictions. Moreover,…


Cited by 878 publications (469 citation statements)
References 52 publications
“…Explainable Artificial Intelligence is a rapidly growing research discipline [1,4,11,22]. The quest for explainability has its roots in the growing adoption of high-performance "black-box" AI models, which spurs public concerns about the safety and ethical usage of AI.…”
Section: Explainability for Trust Calibration
confidence: 99%
“…To address people's distrust in ML models, many considered the importance of transparency by providing explanations for the ML model [4,9,28]. In particular, local explanations that explain the rationale for a single prediction (in contrast to global explanations describing the overall logic of the model) are recommended to help people judge whether to trust a model on a case-by-case basis [28].…”
Section: Introduction
confidence: 99%
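The local-explanation idea quoted above — explaining the rationale for a single prediction rather than the model's overall logic — can be sketched with a minimal perturbation-based example. The toy model, feature names, and zeroing-out scheme below are illustrative assumptions, not the method of the survey or of any cited paper.

```python
def local_explanation(predict, instance):
    """Score each feature of one instance by how much the prediction
    drops when that feature is zeroed out (a crude local attribution)."""
    base = predict(instance)
    importances = {}
    for name in instance:
        perturbed = dict(instance, **{name: 0.0})
        importances[name] = base - predict(perturbed)
    return importances

# Toy stand-in for a black-box model: a hand-written linear scorer.
def model(x):
    return 3.0 * x["income"] + 0.5 * x["age"] - 1.0 * x["debt"]

scores = local_explanation(model, {"income": 2.0, "age": 4.0, "debt": 1.0})
top = max(scores, key=lambda k: abs(scores[k]))
print(top)  # -> income
```

For this particular instance the attribution singles out `income` as the dominant factor, which is the case-by-case judgment local explanations are meant to support; global explanations would instead summarize the scorer's weights over all inputs.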
“…Meanwhile, predictive model interpretability concerns the understanding of model decisions by humans. Interpretability methods can be categorized into three types: explain the data, build an inherently interpretable model, or explain the model after it has been built [43]. In practice, there have been some needs for using machine learning models to ensure which factors are used to make key decisions with boosted trees [44].…”
Section: Results
confidence: 99%
“…In our case, this means using all words and sentences in discharge summary notes provides us with a mechanism for categorizing disorders, however, without pointing to individual words as the main discriminators. In order to compensate for this shortcoming we performed token selection (for explainable AI this is a simple mechanism to obtain better interpretations for models [60]). Interestingly, we found that, independent of the disorder, a substantial amount of tokens can be removed without deteriorating classification performance (see Fig.…
Section: Discussion
confidence: 99%
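The token-selection step described in the excerpt above — removing tokens whose absence does not deteriorate classification performance — can be illustrated with a greedy sketch. The keyword set and the toy confidence function are invented for this example and do not correspond to the cited study's classifier.

```python
def select_tokens(tokens, score):
    """Greedily drop each token whose removal does not lower the score."""
    kept = list(tokens)
    for tok in list(kept):  # iterate over a snapshot of the input
        trial = [t for t in kept if t != tok]
        if score(trial) >= score(kept):
            kept = trial
    return kept

# Toy "classifier confidence": only two keywords carry signal.
KEYWORDS = {"fever", "cough"}
def confidence(tokens):
    return sum(1 for t in tokens if t in KEYWORDS)

kept = select_tokens(["patient", "has", "fever", "and", "cough"], confidence)
print(kept)  # -> ['fever', 'cough']
```

The surviving tokens are exactly the ones the toy classifier relies on, mirroring the observation in the excerpt that many tokens can be removed while performance is preserved, and that the remaining tokens then serve as the discriminators an interpretation can point to.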