Abstract: A wide variety of terminologies, motivations, approaches, and evaluation criteria have been developed within the research field of explainable artificial intelligence (XAI). With the number of XAI methods growing rapidly, researchers and practitioners alike need a taxonomy of methods: to grasp the breadth of the topic, to compare methods, and to select the right XAI method for the traits required by a specific use-case context. Many taxonomies for XAI methods of varying levels of detail a…
“…This rapid growth has led to inconsistencies in the terminology used to describe such methods, making it difficult to identify relevant studies. Although many reviews on IML introduce taxonomies that bring clarity to the different methods,16 there is still inconsistency across research papers when incorporating explanation methods in their analysis. In dementia studies specifically, coupled with the variety of data available for differential diagnosis and prognosis, this has led to a complex landscape of methods that makes it hard to identify best practice.…”
Section: Introduction
“…Details on these methods and their properties can be found in resources such as Christoph Molnar's guide.17 Recent reviews of interpretable machine learning have introduced frameworks (taxonomies) that summarize their properties, provide a visual aid, and promote consistency across future work.13,16,18…”
Introduction: Machine learning research into automated dementia diagnosis is becoming increasingly popular but so far has had limited clinical impact. A key challenge is building robust and generalizable models that generate decisions that can be reliably explained. Some models are designed to be inherently "interpretable," whereas post hoc "explainability" methods can be used for other models.
Methods: Here we sought to summarize the state of the art of interpretable machine learning for dementia.
Results: We identified 92 studies using PubMed, Web of Science, and Scopus. Studies demonstrate promising classification performance but vary in their validation procedures and reporting standards and rely heavily on popular data sets.
Discussion: Future work should incorporate clinicians to validate explanation methods and make conclusive inferences about dementia-related disease pathology. Critically analyzing model explanations also requires an understanding of the interpretability methods themselves. Patient-specific explanations are also required to demonstrate the benefit of interpretable machine learning in clinical practice.
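The abstract's distinction between inherently interpretable models and post hoc explainability is easy to make concrete. Below is a minimal sketch, assuming scikit-learn and purely synthetic data (no dementia cohort or method from the reviewed studies is used): a logistic regression whose fitted coefficients directly serve as the explanation, contrasted with a random forest explained post hoc via permutation importance.

```python
# Minimal sketch: inherently interpretable model vs. post hoc explanation.
# Synthetic data only; sample and feature counts are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently interpretable: the fitted coefficients *are* the explanation.
lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("logistic regression coefficients:", np.round(lr.coef_[0], 3))

# Black box plus post hoc explanation: permutation importance measures how
# much held-out performance drops when each feature is randomly shuffled.
rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pi = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=0)
print("permutation importances:", np.round(pi.importances_mean, 3))
```

The coefficient vector explains the model itself, while permutation importance only probes the fitted black box from the outside, which is one reason the two families are kept distinct when assessing validation practices.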
“…Even though this contribution is not oriented to the chemistry domain, it was useful because graph-based representations are highly relevant for studying molecules; for this reason, these kinds of XAI methods are among the most applied in drug discovery. Additionally, other general studies about XAI have been valuable contributions for discussing, refining, and improving our proposed taxonomy.7,10,14,21 Our main goal is to deliver an easy-to-read and intuitive explanation of each family of XAI methods, oriented to the general research community in medicinal chemistry, while avoiding excessive mathematical or algorithmic descriptions that would require a stronger background in deep learning concepts and technologies.…”
Section: Taxonomy of XAI Methods Proposed for Drug Discovery
Artificial intelligence (AI) is having a growing impact in many areas related to drug discovery. However, for its adoption by the medicinal chemistry community, it is still critical to achieve models that, in addition to performing well in their predictions, can be explained to end users in a trustworthy way, in terms of their knowledge and background. Therefore, the investigation and development of explainable artificial intelligence (XAI) methods has become a key topic to address this challenge. For this reason, we provide a comprehensive literature review of explanation methodologies for AI-based models, focused on the field of drug discovery. In particular, we introduce an intuitive overview of each family of XAI approaches, such as those based on feature attribution, graph topologies, or counterfactual reasoning, oriented to a wide audience without a strong background in the AI discipline. As the main contribution, we propose a new taxonomy of current XAI methods that takes into account specific issues related to the typical representations and computational problems studied in the design of molecules. Additionally, we present the main visualization strategies designed to support XAI approaches in the chemical domain. We conclude with key ideas about each method category, providing insightful analysis of the guidelines and potential benefits of their adoption in medicinal chemistry. This article is categorized under:
Data Science > Artificial Intelligence/Machine Learning
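To make the counterfactual-reasoning family named in the abstract concrete, here is a minimal sketch, not drawn from the reviewed methods: it assumes scikit-learn, uses random binary vectors as stand-ins for molecular fingerprints (no real chemistry data), and greedily flips single bits until a black-box classifier's prediction changes. The set of flipped bits is the counterfactual explanation.

```python
# Minimal counterfactual-reasoning sketch (illustrative only): random binary
# vectors stand in for molecular fingerprints, and single bits are greedily
# flipped until the classifier's prediction changes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(400, 32))   # mock binary "fingerprints"
y = (X[:, 0] & X[:, 5]) | X[:, 9]        # synthetic "activity" rule
clf = RandomForestClassifier(random_state=0).fit(X, y)

def counterfactual(x, target=1, max_flips=5):
    """Greedily flip one bit at a time until `clf` predicts `target`."""
    x = x.copy()
    for _ in range(max_flips):
        if clf.predict(x.reshape(1, -1))[0] == target:
            return x
        # Generate all single-bit-flip neighbors, keep the most promising.
        candidates = np.tile(x, (len(x), 1))
        candidates[np.arange(len(x)), np.arange(len(x))] ^= 1
        probs = clf.predict_proba(candidates)[:, target]
        x = candidates[int(np.argmax(probs))]
    return x if clf.predict(x.reshape(1, -1))[0] == target else None

x0 = X[clf.predict(X) == 0][0]           # a "fingerprint" predicted inactive
cf = counterfactual(x0)
if cf is not None:
    print("flip bits", np.flatnonzero(x0 != cf), "to change the prediction")
```

Real counterfactual methods for molecules add constraints such as chemical validity and minimal edit distance; the unconstrained greedy search above only conveys the core idea of explaining a prediction by the smallest change that would alter it.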
“…We focused on reviewing basic concepts in XAI and transferred them to explainable hardware (see Section 5.2). However, there are other advanced concepts and classifications discussed in the XAI literature, e.g., in the work of Schwalbe and Finzel [91], Sokol and Flach [94], and Speith [96]. Future research could further evaluate and adapt existing XAI research to develop new explainability approaches.…”
The increasing opaqueness of Artificial Intelligence (AI) and its growing influence on our digital society highlight the necessity for AI-based systems that are trustworthy, accountable, and fair. Previous research emphasizes explainability as a means to achieve these properties. In this paper, we argue that system explainability cannot be achieved without accounting for the underlying hardware on which all digital systems, including AI applications, are realized. As a remedy, we propose the concept of explainable hardware and focus on chips, which are particularly relevant to current geopolitical discussions on (trustworthy) semiconductors. Inspired by previous work on Explainable AI (XAI), we develop a hardware explainability framework by identifying relevant stakeholders, unifying existing approaches from hardware manufacturing under the notion of explainability, and discussing their usefulness in satisfying different stakeholders' needs. Our work lays the foundation for future work and structured debates on explainable hardware. CCS Concepts: • Hardware → Integrated circuits; • Security and privacy → Human and societal aspects of security and privacy; • Computing methodologies → Philosophical/theoretical foundations of artificial intelligence.