2020
DOI: 10.48550/arxiv.2006.11371
Preprint

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey

Arun Das,
Paul Rad

Abstract: Nowadays, deep neural networks are widely used in mission-critical systems such as healthcare, self-driving vehicles, and military applications that have a direct impact on human lives. However, the black-box nature of deep neural networks challenges their use in mission-critical applications, raising ethical and judicial concerns that lead to a lack of trust. Explainable Artificial Intelligence (XAI) is a field of Artificial Intelligence (AI) that promotes a set of tools, techniques, and algorithms that can generate high-quality i…

Cited by 177 publications (238 citation statements)
References 61 publications
“…In addition to seeking a similarity index, previous works have also explored other ways to interpret DNN representations [7,26,42,43], e.g., visualizing hidden-layer representations [34,45], reasoning-based explanation [11,14], and gradient-based attribution methods [2,35]. Visualization-based interpretation [45] aims to explain the model decision by depicting the correlation between the input space and the final output.…”
Section: Interpreting DNN Representations (mentioning)
confidence: 99%
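The gradient-based attribution methods this statement cites can be illustrated with a minimal sketch of vanilla gradient saliency in PyTorch: the class score is backpropagated to the input, and the magnitude of the input gradient serves as a per-pixel importance map. The model and input below are hypothetical stand-ins, not the setup of any cited work.

```python
import torch
import torch.nn as nn

# Illustrative stand-in: any differentiable classifier works here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

# Stand-in input "image", marked so autograd tracks gradients to it.
x = torch.rand(1, 3, 32, 32, requires_grad=True)

score = model(x)[0].max()   # logit of the top-scoring class
score.backward()            # d(score)/d(input) via autograd

# Collapse channels with a max to get a (1, 32, 32) saliency map.
saliency = x.grad.abs().amax(dim=1)
print(saliency.shape)
```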
“…For linear or piecewise-linear models, such as linear regression, softmax regression, decision trees, and k-nearest neighbors, their simple classification mechanics make them intrinsically interpretable. However, their expressive ability is quite limited, so they cannot achieve satisfactory classification performance when the features have complex interactions (Molnar 2020; Das and Rad 2020; Linardatos, Papastefanopoulos, and Kotsiantis 2021). Most complex machine learning models are not easily interpretable in themselves.…”
Section: Related Work (mentioning)
confidence: 99%
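The intrinsic interpretability this statement describes can be seen directly in scikit-learn: a softmax-regression model's coefficients and a decision tree's learned rules are the explanation, with no post-hoc explainer needed. A small sketch, using the Iris dataset purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Softmax regression: per-class, per-feature weights read off directly.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print(linear.coef_)

# Decision tree: the fitted if/else rules are human-readable as-is.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))
```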
“…However, most deep neural networks only implicitly learn and use such patterns, and do not explicitly explain why a sample belongs to a class. This raises concerns about applying deep learning in critical fields such as healthcare and automatic pilot systems (Choi et al. 2016; Molnar 2020; Das and Rad 2020; Linardatos, Papastefanopoulos, and Kotsiantis 2021).…”
Section: Introduction (mentioning)
confidence: 99%
“…XAI aims to develop and study methodologies for explaining the predictions made by advanced learning machines such as DNNs. Recent advances in XAI have led to a variety of novel methods [20], [21], [22]. These can be grouped into global and local explanation methods.…”
Section: Introduction (mentioning)
confidence: 99%
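The global-versus-local grouping mentioned in this statement can be made concrete with a small sketch: a global explanation summarizes a feature's influence over a whole dataset (here via permutation importance), while a local explanation attributes a single prediction (here via a simple mean-substitution perturbation). The dataset, model, and perturbation heuristic are illustrative assumptions, not methods from the surveyed paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Global: mean score drop when each feature is shuffled,
# averaged over the entire dataset.
glob = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(np.round(glob.importances_mean, 3))

# Local: for one instance, replace each feature with the dataset mean
# and observe how that single prediction's probability shifts.
x0 = X[:1].copy()
base = model.predict_proba(x0)[0, 1]
local = []
for j in range(X.shape[1]):
    x_pert = x0.copy()
    x_pert[0, j] = X[:, j].mean()
    local.append(base - model.predict_proba(x_pert)[0, 1])
print(np.round(local, 3))
```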