2021
DOI: 10.1002/widm.1424

Explainable artificial intelligence: an analytical review

Abstract: This paper provides a brief analytical review of the current state-of-the-art in relation to the explainability of artificial intelligence in the context of recent advances in machine learning and deep learning. The paper starts with a brief historical introduction and a taxonomy, and formulates the main challenges in terms of explainability building on the recently formulated National Institute of Standards four principles of explainability. Recently published methods related to the topic are then critically …


Cited by 326 publications (160 citation statements) | References 60 publications
“…Machine learning is increasingly gaining momentum in criminology and criminal justice (Brennan and Oliver, 2013; Berk, 2013; Campedelli, 2021). In response to lively debates regarding the "black box" nature of predictions and recommendations offered by machine learning algorithms in high-stakes applications, including those related to policing, criminal justice, and healthcare, scholars in Artificial Intelligence and computer science have recently proposed several approaches to increase model interpretability, fairness and accountability (Holzinger et al, 2017; Rudin, 2019; Gunning et al, 2019; Angelov et al, 2021). This work leverages such advances, combining predictions with explainability on both analytical levels.…”
Section: Analytical Strategy (mentioning, confidence: 99%)
“…Yet with millions or billions of learnable parameters many of these technologies often leave little room for explaining how the algorithmic ‘oracle’ draws its inferences [36, 37]. It is interesting to reflect on how global human health decision-makers have greater faith in the inductive bias of artificially intelligent ‘black boxes’ of limited explainability [38, 39] than ‘explainably intelligent’ fellow humans with inductive biases and explainable epistemes fine-tuned over millennia of evolution-in-context.…”
Section: Transactions Across a Power Differential (mentioning, confidence: 99%)
“…Although these tools had been greatly successful in various biological topics, biologists are still curious about how a machine learning model makes decisions, and which features of the input data play important roles in the model output. To answer these questions, explainable artificial intelligence (XAI) programs have recently emerged to enable the development of models that can be understood by humans [10, 11].…”
Section: Introduction (mentioning, confidence: 99%)
“…Although these tools were greatly successful in various biological topics, biologists are still curious about how a machine learning model makes decisions, and which features of the input data play important roles in the model output. To solve this issue, a new approach to artificial intelligence, explainable artificial intelligence (XAI), has recently emerged with the aim of encouraging the development of methods that generate models that can be understood by humans [10, 11]. These methods related to XAI were quickly applied to interpret machine learning models obtained from biological data [12, 13].…”
Section: Introduction (mentioning, confidence: 99%)