As deep learning has been successfully deployed in diverse applications, there is an ever-increasing need to explain its decisions. Case-based reasoning has proved effective for this purpose in many areas. Prototype-based explanation is a case-based method that explains a model's prediction through the distance between an input and learned prototypes. However, existing methods are less reliable because this distance is not always consistent with human perception. In this study, we construct a latent space, which we call an explanation space, using distributional embedding and latent space regularization. The explanation space ensures that images that are similar in terms of human-interpretable features share similar latent representations, so that distance-based explanations remain consistent with human perception. The explanation space also provides an additional explanation by transition, allowing the user to understand the factors that affect the distance. Through extensive experiments, including human evaluation, we show that the explanation space provides a more human-understandable explanation.

INDEX TERMS explainable AI, trustworthy machine learning, interpretable machine learning.
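To make the prototype-based setup concrete, the following is a minimal sketch in PyTorch, under assumptions of my own: the names (PrototypeExplainer, latent_dim, num_prototypes) and the toy encoder are illustrative and are not taken from the paper's implementation. It shows only the core mechanism the abstract describes: scoring an encoded input by its distance to a set of learned prototypes, with the nearest prototype serving as the explanatory case.

import torch
import torch.nn as nn

class PrototypeExplainer(nn.Module):
    # Hypothetical prototype head: scores a latent vector by its squared
    # Euclidean distance to a set of learned prototypes.
    def __init__(self, latent_dim: int, num_prototypes: int):
        super().__init__()
        # Prototypes are learned parameters living in the same latent
        # (explanation) space as the encoded inputs.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, latent_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim) -> (batch, num_prototypes) squared distances.
        return torch.cdist(z, self.prototypes) ** 2

# Usage: explain each prediction by its nearest prototype (a learned case).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32))  # toy encoder
explainer = PrototypeExplainer(latent_dim=32, num_prototypes=10)
x = torch.randn(4, 1, 28, 28)        # a batch of toy images
distances = explainer(encoder(x))    # shape (4, 10)
nearest = distances.argmin(dim=1)    # index of the closest prototype per image

The paper's contribution, as stated in the abstract, is to shape the latent space these distances are computed in (via distributional embedding and regularization) so that small distances correspond to similarity in human-interpretable features.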