Deep neural networks have achieved great success in many real-world applications, yet explaining their decision-making process to an end user remains difficult. In this paper, we address the explainable AI problem for deep neural networks with our proposed framework, named IASSA, which uses an iterative and adaptive sampling module to generate an importance map indicating how salient each pixel is for the model's prediction. We employ an affinity matrix calculated on multi-level deep-learning features to explore long-range pixel-to-pixel correlation, which can shift the saliency values guided by our long-range and parameter-free spatial attention. Extensive experiments on the MS-COCO dataset show that our proposed approach matches or exceeds the performance of state-of-the-art black-box explanation methods.
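As a rough illustration of the affinity-guided attention described above, the sketch below normalizes flattened CNN features, builds a pixel-to-pixel cosine affinity matrix, and uses its row-normalized form to redistribute an initial saliency map. The function name, the cosine affinity, and the softmax normalization are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def refine_saliency(features: np.ndarray, saliency: np.ndarray) -> np.ndarray:
    """Shift saliency values using long-range pixel-to-pixel affinity.

    features: (C, H, W) multi-level deep features; saliency: (H, W) initial map.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)  # unit-normalize each pixel's feature
    affinity = f.T @ f                                         # (HW, HW) cosine affinity
    attn = np.exp(affinity)
    attn /= attn.sum(axis=1, keepdims=True)                    # row-wise softmax: parameter-free attention
    return (attn @ saliency.reshape(h * w)).reshape(h, w)      # propagate saliency along correlations
```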
Recent advances in artificial intelligence (AI), driven mainly by deep neural networks, have yielded remarkable progress in fields such as computer vision, natural language processing, and reinforcement learning. Despite these successes, the inability to predict how AI systems will behave "in the wild" impacts almost all stages of planning and deployment, including research and development, verification and validation, and user trust and acceptance. The field of explainable artificial intelligence (XAI) seeks to develop techniques enabling AI algorithms to generate explanations of their results; generally these are human-interpretable representations or visualizations meant to "explain" how the system produced its outputs. We introduce the Explainable AI Toolkit (XAITK), a DARPA-sponsored effort that builds on results from the 4-year DARPA XAI program. The XAITK has two goals: (a) to consolidate research results from DARPA XAI into a single publicly accessible repository; and (b) to identify operationally relevant capabilities developed on DARPA XAI and assist in their transition to interested partners. We first describe the XAITK website and associated capabilities. These place the research results from DARPA XAI in the wider context of general research in the field of XAI, and include performer contributions of code, data, publications, and reports. We then describe the XAITK analytics and autonomy software frameworks. These are Python-based frameworks focused on particular XAI domains, designed to provide a single integration endpoint for multiple algorithm implementations from across DARPA XAI. Each framework generalizes APIs for system-level data and control while providing a plugin interface for existing and future algorithm implementations. The XAITK project can be followed at: https://xaitk.org.
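The plugin-based design described above can be illustrated with a generic registry pattern. The class and function names below are hypothetical, shown only to convey the idea of a single integration endpoint; they are not the actual XAITK API.

```python
from abc import ABC, abstractmethod
import numpy as np

class SaliencyAlgorithm(ABC):
    """Common interface each plugged-in explanation algorithm implements (hypothetical)."""
    @abstractmethod
    def generate(self, image: np.ndarray, model) -> np.ndarray:
        """Return a per-pixel saliency map for `model`'s output on `image`."""

_REGISTRY: dict = {}

def register(name: str):
    """Decorator exposing an implementation through one integration endpoint."""
    def wrap(cls):
        _REGISTRY[name] = cls
        return cls
    return wrap

def get_algorithm(name: str, **kwargs) -> SaliencyAlgorithm:
    """Instantiate a registered algorithm by name."""
    return _REGISTRY[name](**kwargs)

@register("random-baseline")
class RandomSaliency(SaliencyAlgorithm):
    def generate(self, image, model):
        return np.random.rand(*image.shape[:2])   # placeholder explanation for demonstration
```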
This paper aims at visualizing deep convolutional neural network interpretations for aerial imagery and understanding how these interpretations change across datasets or when network weights are damaged. Our visualization results offer insights on the generalization power and resilience of commonly used networks, such as VGG16, ResNet50, and DenseNet121. Our experiments on the AID and the UCM aerial datasets demonstrate the emergence of object and texture detectors in convolutional networks commonly used for classification. We further analyze these interpretations when the network is trained on one dataset and tested on another to demonstrate the robustness of feature learning across aerial datasets. We also explore the shift in interpretations when performing transfer learning from an aerial dataset (AID) to a generic object dataset (MS-COCO). These results illustrate how transfer learning benefits the network's internal representations. To analyze the effects of damage on activation maps, we propose simulating damage by randomly zeroing network weights at different levels of the network. We then carry out experiments retraining the network to check whether it can recover the lost interpretations. Visualizing changes in the neural network's interpretation when the undamaged weights are updated allows us to assess the resilience of a network visually. Finally, we propose a new metric for the quantitative assessment of network resilience.
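The damage simulation described above, randomly zeroing network weights, might look like the following sketch for a PyTorch model; the damage fraction `p` and the restriction to convolutional layers are illustrative choices, not the paper's exact protocol.

```python
import torch

@torch.no_grad()
def damage_weights(model: torch.nn.Module, p: float = 0.1) -> None:
    """Simulate damage: zero a random fraction `p` of each conv layer's weights."""
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            keep = (torch.rand_like(module.weight) >= p).to(module.weight.dtype)
            module.weight.mul_(keep)   # zeroed entries stay zero until retraining updates them
```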
Quantifying the value of explanations in a human-in-the-loop (HITL) system is difficult. Previous methods either measure explanation-specific values that do not correspond to user tasks and needs, or poll users on how useful they find the explanations to be. In this work, we quantify how much explanations help the user through a utility-based paradigm that measures the change in task performance with and without explanations. Our chosen task is content-based image retrieval (CBIR), which has well-established baselines and performance metrics independent of explainability. We extend an existing HITL image retrieval system that incorporates user feedback with similarity-based saliency maps (SBSM), which indicate to the user which parts of the retrieved images are most similar to the query image. The system helps the user understand what it is paying attention to through saliency maps, and the user helps the system understand their goal through saliency-guided relevance feedback. Using the MS-COCO dataset, a standard object detection and segmentation dataset, we conducted extensive crowd-sourced experiments validating that SBSM improves interactive image retrieval. Although the performance increase is modest in the general case, in more difficult cases such as cluttered scenes, using explanations yields a 6.5% increase in accuracy. To the best of our knowledge, this is the first large-scale user study showing that visual saliency-map explanations improve performance on a real-world, interactive task. Our utility-based evaluation paradigm is general and potentially applicable to any task for which explainability can be incorporated.
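A saliency map in the spirit of SBSM can be sketched with occlusion: mask windows of the retrieved image and record how much its similarity to the query drops. The `embed` function, window size, and stride below are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def similarity_saliency(query_img, retrieved_img, embed, win=32, stride=16):
    """Occlude patches of the retrieved image; a larger similarity drop
    marks a region as more responsible for the match to the query."""
    q = embed(query_img)
    r = embed(retrieved_img)
    base = float(q @ r / (np.linalg.norm(q) * np.linalg.norm(r)))  # cosine similarity
    h, w = retrieved_img.shape[:2]
    sal = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            masked = retrieved_img.copy()
            masked[y:y + win, x:x + win] = 0                       # occlude one window
            m = embed(masked)
            drop = base - float(q @ m / (np.linalg.norm(q) * np.linalg.norm(m)))
            sal[y:y + win, x:x + win] += drop
            counts[y:y + win, x:x + win] += 1
    return sal / np.maximum(counts, 1)                             # average overlapping windows
```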