2023
DOI: 10.1007/s10618-022-00867-8

A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts

Abstract: In the meantime, a wide variety of terminologies, motivations, approaches, and evaluation criteria have been developed within the research field of explainable artificial intelligence (XAI). With the amount of XAI methods vastly growing, a taxonomy of methods is needed by researchers as well as practitioners: To grasp the breadth of the topic, compare methods, and to select the right XAI method based on traits required by a specific use-case context. Many taxonomies for XAI methods of varying level of detail a…


Cited by 86 publications (40 citation statements)
References 130 publications
“…This rapid growth has led to inconsistencies in the terminology used to describe such methods, making it difficult to identify relevant studies. Although many reviews on IML introduce taxonomies that bring clarity to the different methods [16], there is still inconsistency across research papers when incorporating explanation methods in their analysis. In dementia studies specifically, coupled with the variety of data available for differential diagnosis and prognosis, this has led to a complex landscape of methods that makes it hard to identify best practice…”
Section: Introduction (mentioning)
confidence: 99%
“…Details on these methods and their properties can be found in resources such as Christoph Molnar's guide [17]. Recent reviews of interpretable machine learning have introduced frameworks (taxonomies) that summarize their properties, provide a visual aid, and promote consistency across future work [13,16,18]…”
Section: Introduction (mentioning)
confidence: 99%
“…Although this contribution is not oriented to the chemistry domain, it was useful because graph‐based representations are highly relevant for studying molecules and, for this reason, these kinds of XAI methods are among the most applied in drug discovery. Additionally, other general studies about XAI have been valuable contributions for discussing, refining, and improving our proposed taxonomy [7,10,14,21]. Our main goal is to deliver an easy-to-read and intuitive explanation for each family of XAI methods, oriented to the general research community in medicinal chemistry, while avoiding excessive mathematical or algorithmic descriptions that would require a stronger background in deep learning issues and technologies…”
Section: Taxonomy of XAI Methods Proposed for Drug Discovery (mentioning)
confidence: 99%
“…We focused on reviewing basic concepts in XAI and transferred them to explainable hardware, see Section 5.2. However, there are other advanced concepts and classifications that are discussed in XAI literature, e.g., in the work of Schwalbe and Finzel [91], Sokol and Flach [94], and Speith [96]. Future research could further evaluate and adapt existing XAI research to develop new explainability approaches…”
Section: Limitations and Future Work (mentioning)
confidence: 99%