2021
DOI: 10.3390/make3040045
A Multi-Component Framework for the Analysis and Design of Explainable Artificial Intelligence

Abstract: The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations for industrial, commercial, and social value. Second, there is an emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles to ensure transparency and trust. These two threads have created a ki…

Cited by 28 publications (23 citation statements)
References 34 publications
“…As a consequence, we conclude that heatmaps are a good start but remain one of the weaker XAI methods, as they are not semantically driven and can only provide low-level, post-hoc explanations 37 . Other methods such as gradient-based saliency maps, Class Activation Mapping, and Excitation Backpropagation can all be considered in future work 38,39 .…”
Section: Discussion
confidence: 92%
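The excerpt above names gradient-based saliency maps as a candidate XAI method. As a minimal illustrative sketch (not from the paper itself), a saliency map assigns each input feature the magnitude of the model output's derivative with respect to that feature; the toy scoring function and all names below are assumptions for demonstration:

```python
# Minimal sketch of a gradient-based saliency map, approximated with
# central finite differences on a toy scoring function. In practice
# the gradient would come from automatic differentiation.

def model_score(x):
    # Toy "classifier" score: a fixed weighted sum of the inputs.
    weights = [0.5, -2.0, 1.5, 0.0]
    return sum(w * xi for w, xi in zip(weights, x))

def saliency(x, eps=1e-6):
    # Approximate |d score / d x_i| for each feature; larger values
    # mark features with more influence on the score.
    grads = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grads.append(abs((model_score(xp) - model_score(xm)) / (2 * eps)))
    return grads

print(saliency([1.0, 1.0, 1.0, 1.0]))  # feature 1 dominates: weight |-2.0|
```

As the excerpt notes, such maps are low-level and post hoc: they rank input features by sensitivity but do not explain the decision in semantic terms.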
“…Although deep learning models currently have the ability to conduct language processing tasks such as tagging, text classification, machine translation, and question answering, existing state-of-the-art models are criticized for lacking explainability —more specifically, being able to describe how the algorithm came to a particular result or action, which is considered a key pillar in discourse around ethical AI development [ 103 - 105 ]. This and future studies must seek to improve the methods of explainable NLP.…”
Section: Discussion
confidence: 99%
“…It enhances the social acceptance of the decisions made by the system, with a proper explanation of the decision being made. Implementing such a multicomponent framework for XAI system analysis imparts deeper trust in the system [61]. • Visual analytics: With interaction representations of the decisions made by the XAI systems, the incorporation of visual analytics makes humans understand the decisionmaking process of AI systems with much ease.…”
Section: A. Explainable AI
confidence: 99%