“…To get an overview of currently available taxonomies, we reviewed eleven papers from the last three years (2019-2021) referencing, containing, or proposing taxonomies of explainability methods: [6,10,14,30,31,44,49,54,63,65,72]. While this is by no means a systematic review, we focused on representative papers in the field.…”
Section: Current Taxonomies Of Explainability Methods
“…6 In general, dimensions are mostly independent of each other (except for applicability, which only applies to post-hoc methods), though some can be combined more easily with each other (e.g., a global scope makes more sense for ante-hoc methods). The taxonomies proposed in [30,31,44,65,72] adhere to this approach.…”
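The dimension structure described in the quoted passage — independent axes, with the constraint that applicability only applies to post-hoc methods — can be sketched as a small data model. This is an illustrative encoding, not taken from any of the cited papers; the names and example classifications are assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Stage(Enum):
    ANTE_HOC = "ante-hoc"   # interpretability built into the model itself
    POST_HOC = "post-hoc"   # explanation produced after/around a trained model

class Scope(Enum):
    LOCAL = "local"         # explains a single prediction
    GLOBAL = "global"       # explains overall model behavior

class Applicability(Enum):
    MODEL_SPECIFIC = "model-specific"
    MODEL_AGNOSTIC = "model-agnostic"

@dataclass
class XAIMethod:
    name: str
    stage: Stage
    scope: Scope
    # Per the quoted passage, applicability is defined only for post-hoc
    # methods, so it is optional and validated below.
    applicability: Optional[Applicability] = None

    def __post_init__(self):
        if self.stage is Stage.ANTE_HOC and self.applicability is not None:
            raise ValueError("applicability only applies to post-hoc methods")

# Illustrative classifications (commonly made in the literature):
lime = XAIMethod("LIME", Stage.POST_HOC, Scope.LOCAL, Applicability.MODEL_AGNOSTIC)
tree = XAIMethod("decision tree", Stage.ANTE_HOC, Scope.GLOBAL)
```

Encoding the constraint in `__post_init__` makes the dependency between dimensions explicit, which is exactly the kind of interaction the quoted passage notes that flat taxonomies tend to gloss over.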
The recent surge in publications related to explainable artificial intelligence (XAI) has made it almost insurmountably difficult to get started with, or stay up to date on, XAI. For this reason, articles and reviews that present taxonomies of XAI methods seem to be a welcome way to get an overview of the field. Building on this idea, there is currently a trend of producing such taxonomies, leading to several competing approaches to constructing them. In this paper, we review recent approaches to constructing taxonomies of XAI methods and discuss both the general challenges they face and their individual advantages and limitations. Our review is intended to make scholars aware of the challenges current taxonomies face. As we will argue, when charting the field of XAI, it may not be sufficient to rely on any one of the approaches we found. To address this problem, we propose and discuss three possible solutions: a new taxonomy that incorporates the reviewed ones, a database of XAI methods, and a decision tree to help choose fitting methods.
CCS CONCEPTS• General and reference → Surveys and overviews; • Computing methodologies → Artificial intelligence.
“…This resulted in a detailed meta-study on the state of the literature, and a detailed list of XAI method aspects and metrics (chapters 7 and 8). More recently, Guidotti et al. (2021) illustrate some key dimensions to distinguish XAI approaches in a beginner-friendly book section. They present a broad collection of the most common explanation types and state-of-the-art explanators, respectively, and discuss their usability and applicability.…”
Section: Broad Conceptual Surveys
“…Counterfactual examples are sometimes seen as a special case of more general contrastive examples (Stepin et al. 2021). Desiderata associated specifically with counterfactual examples are that they are valid inputs close to the original examples, with few features changed (sparsity), that they are actionable for the explainee, and that they adhere to known causal relations (Guidotti et al. 2021; Verma et al. 2020; Keane et al. 2021).…”
Section: Contrastive / Counterfactual / Near Miss Examples Including ...
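The counterfactual desiderata quoted above (validity, proximity, sparsity) can be made concrete with a toy search: flip as few features as possible so that a classifier's prediction changes, preferring the smallest change among valid flips. The model and search below are an illustrative sketch, not any method from the cited papers; all names and values are hypothetical.

```python
from itertools import combinations, product

def predict(x):
    # Hypothetical binary classifier: approves iff income + credit >= 100.
    return int(x["income"] + x["credit"] >= 100)

def _assignments(features, candidate_values):
    # Enumerate all combinations of candidate values for the chosen features.
    for combo in product(*(candidate_values[f] for f in features)):
        yield dict(zip(features, combo))

def counterfactual(x, candidate_values, target=1):
    """Greedy search: try changing 1 feature, then 2, ... (sparsity first);
    among valid flips of the same size, keep the smallest total change
    (proximity). Returns None if no counterfactual exists."""
    features = list(candidate_values)
    for k in range(1, len(features) + 1):
        best = None
        for subset in combinations(features, k):
            for vals in _assignments(subset, candidate_values):
                cf = {**x, **vals}
                if predict(cf) == target:
                    dist = sum(abs(cf[f] - x[f]) for f in subset)
                    if best is None or dist < best[0]:
                        best = (dist, cf)
        if best:
            return best[1]
    return None

x = {"income": 40, "credit": 50}                      # rejected: 40 + 50 < 100
cands = {"income": [40, 50, 60], "credit": [50, 60]}  # actionable value grid
print(counterfactual(x, cands))                       # a one-feature change suffices
```

Restricting the search to a grid of candidate values is one simple way to model actionability (only changes the explainee could plausibly make are considered); the causal-plausibility desideratum from the quote is not modeled here.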
In the meantime, a wide variety of terminologies, motivations, approaches, and evaluation criteria have been developed within the research field of explainable artificial intelligence (XAI). With the number of XAI methods growing rapidly, a taxonomy of methods is needed by researchers as well as practitioners: to grasp the breadth of the topic, to compare methods, and to select the right XAI method based on traits required by a specific use-case context. Many taxonomies of XAI methods, of varying levels of detail and depth, can be found in the literature. While they often have a different focus, they also exhibit many points of overlap. This paper unifies these efforts and provides a complete taxonomy of XAI methods with respect to notions present in the current state of research. In a structured literature analysis and meta-study, we identified and reviewed more than 50 of the most cited and most current surveys on XAI methods, metrics, and method traits. After summarizing them in a survey of surveys, we merge the terminologies and concepts of the articles into a unified structured taxonomy. Single concepts therein are illustrated by more than 50 diverse example methods in total, which we categorize accordingly. The taxonomy may serve beginners, researchers, and practitioners alike as a reference and wide-ranging overview of XAI method traits and aspects. Hence, it provides foundations for targeted, use-case-oriented, and context-sensitive future research.
“…Furthermore, explainability approaches for AI are distinguished by their applicability. Approaches that are specific only to certain kinds of AI models are referred to as model-specific while those that apply to AI models more generally, independent of their internal architecture, are called model-agnostic [38,39,58,89,94,96]. In XAI, this differentiation is only applied to post-hoc explainability approaches.…”
Section: Transferring Basic Concepts From XAI To Hardware Explainability
The increasing opaqueness of Artificial Intelligence (AI) and its growing influence on our digital society highlight the necessity for AI-based systems that are trustworthy, accountable, and fair. Previous research emphasizes explainability as a means to achieve these properties. In this paper, we argue that system explainability cannot be achieved without accounting for the underlying hardware on which all digital systems, including AI applications, are realized. As a remedy, we propose the concept of explainable hardware, and focus on chips, which are particularly relevant to current geopolitical discussions on (trustworthy) semiconductors. Inspired by previous work on Explainable AI (XAI), we develop a hardware explainability framework by identifying relevant stakeholders, unifying existing approaches from hardware manufacturing under the notion of explainability, and discussing their usefulness in satisfying different stakeholders' needs. Our work lays the foundation for future work and structured debates on explainable hardware. CCS Concepts: • Hardware → Integrated circuits; • Security and privacy → Human and societal aspects of security and privacy; • Computing methodologies → Philosophical/theoretical foundations of artificial intelligence.