Explainable AI: roles and stakeholders, desirements and challenges
Robert R. Hoffman,
Shane T. Mueller,
Gary Klein
et al.
Abstract:
Introduction: The purpose of the Stakeholder Playbook is to enable the developers of explainable AI systems to take into account the different ways in which different stakeholders or role-holders need to “look inside” the AI/XAI systems.
Method: We conducted structured cognitive interviews with senior and mid-career professionals who had direct experience either developing or using AI and/or autonomous systems.
Results: The results show that role-holders need access to others (e.g., trusted engineers and trusted vendo…
“…(3) Using abstractions to simplify explanations: High-level patterns are the basis for describing a big plan's little steps. Automating the discovery of abstractions has long been a challenge, and understanding the discovery and sharing of abstractions in learning and explanation is at the frontier of XAI research today (Kuppa and Le-Khac, 2020; Hoffman et al., 2018).…”
Purpose: Explainable artificial intelligence (XAI) is important in several industrial applications. The study aims to compare two important methods used for explainable AI algorithms.
Design/methodology/approach: In this study, multiple criteria have been used to compare the explainable Ranked Area Integrals (xRAI) and integrated gradients (IG) methods for the explainability of AI algorithms, based on a multimethod, phase-wise analysis research design.
Findings: The theoretical part compares the frameworks of the two methods; from a practical point of view, the methods have been compared across five dimensions: functional, operational, usability, safety, and validation.
Research limitations/implications: The comparison combines criteria from theoretical and practical points of view, which demonstrates trade-offs in the choices available to the user.
Originality/value: Our results show that the xRAI method performs better from a theoretical point of view, whereas the IG method shows good results for both model accuracy and prediction quality.
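The xRAI framework above is not widely documented, but the integrated gradients (IG) method it is compared against follows a standard published definition (Sundararajan et al., 2017): the attribution for feature i is (x_i − x'_i) times the path integral of the model's gradient along the straight line from a baseline x' to the input x. Below is a minimal illustrative sketch, not the study's implementation; `model_grad` is a hypothetical callable standing in for whatever gradient function an autodiff framework would supply.

```python
import numpy as np

def integrated_gradients(model_grad, x, baseline=None, steps=50):
    """Approximate IG attributions via a Riemann sum.

    model_grad(z) -> dF/dz for a scalar-output model F; in practice this
    would wrap an autodiff call (e.g., PyTorch or TensorFlow gradients).
    """
    if baseline is None:
        baseline = np.zeros_like(x)  # a common default: all-zero baseline
    # Average the gradient at points interpolated between baseline and x
    grad_sum = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, steps):
        grad_sum += model_grad(baseline + alpha * (x - baseline))
    avg_grads = grad_sum / steps
    # Scale by the input-baseline difference, per the IG formula
    return (x - baseline) * avg_grads
```

A useful sanity check is IG's completeness axiom: the attributions should sum approximately to F(x) − F(baseline); a large gap suggests `steps` is too small.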
Researchers focusing on how artificial intelligence (AI) methods explain their decisions often discuss controversies and limitations. Some even assert that most publications offer little to no valuable contribution. In this article, we substantiate the claim that explainable AI (XAI) is in trouble by describing and illustrating four problems: disagreements on the scope of XAI; the lack of definitional cohesion, precision, and adoption; the issues with motivations for XAI research; and limited and inconsistent evaluations. As we delve into their potential underlying sources, our analysis finds that these problems seem to originate from AI researchers succumbing to the pitfalls of interdisciplinarity or from insufficient scientific rigor. Analyzing these potential factors, we discuss the literature, at times coming across unexplored research questions. Hoping to alleviate existing problems, we make recommendations on precautions against the challenges of interdisciplinarity and propose directions in support of scientific rigor.
This paper summarizes the psychological insights and related design challenges that have emerged in the field of Explainable AI (XAI). The summary is organized as a set of principles, some of which have recently been instantiated in XAI research. The principles primarily concern the design and evaluation stages of XAI system development, that is, the design of explanations and the design of experiments for evaluating the performance of XAI systems. They can serve as guidance to ensure that AI systems are human-centered and effectively assist people in solving difficult problems.