Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself: classic AI represented comprehensible, retraceable approaches. However, their weakness lay in dealing with the uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black‐box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI: to reach a level of explainable medicine, we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide the definitions necessary to discriminate between explainability and causability, as well as a use‐case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system.
The Inorganic Crystal Structure Database (ICSD) is the world's largest database of fully evaluated and published crystal structure data, mostly obtained from experimental results. However, the purely experimental approach is no longer the only route to discovering new compounds and structures. In the past few decades, numerous computational methods for simulating and predicting the structures of inorganic solids have emerged, creating large amounts of theoretical crystal data. In order to take account of these new developments, the scope of the ICSD was extended in 2017 to include theoretical structures published in peer-reviewed journals. Each theoretical structure has been carefully evaluated, and the resulting CIF has been extended and standardized. Furthermore, a first classification of the theoretical data in the ICSD is presented, including additional categories used for the comparison of experimental and theoretical information.
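As a practical aside (not part of the paper), CIFs such as those distributed by the ICSD can be inspected with standard open-source crystallography tooling. The sketch below uses the pymatgen library; the file name theoretical_structure.cif is a placeholder, not an actual ICSD entry.

```python
# Minimal sketch: loading a CIF file and inspecting basic structural data.
# Assumes pymatgen is installed (pip install pymatgen); the file name is a
# placeholder, not a real ICSD entry.
from pymatgen.core import Structure

structure = Structure.from_file("theoretical_structure.cif")

print(structure.composition.reduced_formula)  # reduced chemical formula
print(structure.lattice)                      # lattice vectors and parameters
print(structure.get_space_group_info())       # (symbol, number) via symmetry analysis
```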
BACKGROUND AND PURPOSE
Oxidative stress [i.e. increased levels of reactive oxygen species (ROS)] has been suggested as a pathomechanism of different diseases, although the disease‐relevant sources of ROS remain to be identified. One of these sources may be NADPH oxidases. However, due to increasing concerns about the specificity of the compounds commonly used as NADPH oxidase inhibitors, data obtained with these compounds may have to be re‐interpreted.
EXPERIMENTAL APPROACH
We compared the pharmacological profiles of the commonly used NADPH oxidase inhibitors diphenylene iodonium (DPI), apocynin and 4‐(2‐aminoethyl)‐benzenesulphonyl fluoride (AEBSF), as well as the novel triazolo pyrimidine VAS3947. We used several assays for detecting cellular and tissue ROS, as none of them is specific and artefact‐free.
KEY RESULTS
DPI abolished NADPH oxidase‐mediated ROS formation, but also inhibited other flavo‐enzymes such as NO synthase (NOS) and xanthine oxidase (XOD). Apocynin interfered with ROS detection and varied considerably in efficacy and potency, as did AEBSF. Conversely, the novel NADPH oxidase inhibitor VAS3947 consistently inhibited NADPH oxidase activity at low micromolar concentrations and interfered neither with ROS detection nor with XOD or eNOS activities. VAS3947 attenuated ROS formation in aortas of spontaneously hypertensive rats (SHRs), where NOS or XOD inhibitors were without effect.
CONCLUSIONS AND IMPLICATIONS
Our data suggest that triazolo pyrimidines such as VAS3947 are specific NADPH oxidase inhibitors, while DPI and apocynin can no longer be recommended. Based on the effects of VAS3947, NADPH oxidases appear to be a major source of ROS in the aortas of SHRs.
Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Such autonomous approaches can be very convenient. However, in certain domains, e.g., the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML methods, and a huge variety of techniques already exists. For example, layer-wise relevance propagation can highlight the parts of the input to, and the representations within, a neural network that caused a particular result. This is an important first step towards ensuring that end users, e.g., medical professionals, can assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators alike. Interactive ML adds human expertise to AI/ML processes by enabling domain experts to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces, we have to address the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale (SCS) for measuring the quality of explanations. It is based on our notion of Causability [1] combined with concepts adapted from a widely accepted usability scale.
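To make the scoring idea concrete, here is a minimal sketch, assuming a SUS-like design in which ten items are each rated on a 1-5 Likert scale and the total is normalized to the range 0-1. The item wording and the exact computation used by the SCS may differ, so treat the function name scs_like_score and the scheme below as illustrative assumptions rather than the published instrument.

```python
# Illustrative sketch of scoring a 10-item, 5-point Likert questionnaire,
# normalized to [0, 1] in the style of usability scales such as SUS.
# The scoring scheme is an assumption for illustration; consult the SCS
# paper [1] for the actual items and computation.
from typing import Sequence

def scs_like_score(ratings: Sequence[int], max_rating: int = 5) -> float:
    """Return the summed ratings divided by the maximum possible total (range 0-1)."""
    if not ratings:
        raise ValueError("At least one item rating is required.")
    if any(r < 1 or r > max_rating for r in ratings):
        raise ValueError(f"Ratings must lie between 1 and {max_rating}.")
    return sum(ratings) / (len(ratings) * max_rating)

# Example: ten hypothetical item ratings from one evaluator.
print(scs_like_score([4, 5, 3, 4, 4, 5, 2, 4, 3, 4]))  # -> 0.76
```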
Rare disease (RD) patient registries are powerful instruments that help develop clinical research, facilitate the planning of appropriate clinical trials, improve patient care, and support healthcare management. They constitute a key information system supporting the activities of the European Reference Networks (ERNs) on rare diseases. A rapid proliferation of RD registries has occurred in recent years, and there is a need to develop guidance on the minimum requirements, recommendations and standards necessary to maintain a high-quality registry. In response to this heterogeneity, within the framework of RD-Connect, a European platform connecting databases, registries, biobanks and clinical bioinformatics for rare disease research, we report a list of recommendations, developed by a group of experts including members of patient organizations, to be used as a framework for improving the quality of RD registries. This list covers aspects of governance; Findable, Accessible, Interoperable and Reusable (FAIR) data and information; infrastructure; documentation; training; and quality audits. It is intended to be used by established as well as new RD registries. Further work includes the development of a toolkit to enable registries to continuously assess and improve their organizational and data quality.