Model-agnostic methods in eXplainable Artificial Intelligence (XAI) isolate the explanation system from the AI model architecture, typically a machine-learning or black-box model. Existing XAI libraries offer a wide range of explanation methods that are reusable across domains and models, with different parameter choices. However, it is not clear which explainer is suitable for a given situation, domain, AI model, and set of user preferences; choosing a proper explanation method is itself a complex decision-making process. In this paper, we propose applying Case-Based Reasoning (CBR) to support this task by capturing user preferences about explanation results in a case base. We have defined the corresponding CBR process to retrieve a suitable explainer from a catalogue built from existing XAI libraries. CBR supports learning from explanation experiences and helps retrieve explainers for other, similar scenarios.
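As a rough illustration of the idea, the sketch below shows a minimal similarity-based retrieval over a case base of past explanation experiences. All attribute names, weights, explainers, and cases are hypothetical assumptions for illustration only; they are not the case structure or retrieval procedure defined in the paper.

```python
# Minimal sketch: similarity-based retrieval of an explainer from a case base.
# Attributes, weights, and cases are illustrative assumptions, not the paper's design.
from dataclasses import dataclass


@dataclass
class Case:
    domain: str          # e.g. "healthcare", "finance"
    model_type: str      # e.g. "random_forest", "cnn"
    data_type: str       # e.g. "tabular", "image"
    explainer: str       # explainer applied in that past experience, e.g. "LIME", "SHAP"
    user_rating: float   # how well the explanation satisfied the user (0..1)


def similarity(query: dict, case: Case, weights=None) -> float:
    """Weighted exact-match similarity over symbolic case attributes."""
    weights = weights or {"domain": 0.3, "model_type": 0.4, "data_type": 0.3}
    return sum(w * (1.0 if query.get(attr) == getattr(case, attr) else 0.0)
               for attr, w in weights.items())


def retrieve(query: dict, case_base: list, k: int = 3) -> list:
    """Return the k most similar (and best rated) past explanation experiences."""
    ranked = sorted(case_base,
                    key=lambda c: (similarity(query, c), c.user_rating),
                    reverse=True)
    return ranked[:k]


# Hypothetical usage: suggest explainers for a new tabular random-forest model.
case_base = [
    Case("finance", "random_forest", "tabular", "SHAP", 0.9),
    Case("healthcare", "cnn", "image", "Grad-CAM", 0.8),
    Case("finance", "gradient_boosting", "tabular", "LIME", 0.7),
]
query = {"domain": "finance", "model_type": "random_forest", "data_type": "tabular"}
for case in retrieve(query, case_base):
    print(case.explainer, case.user_rating)
```

In such a scheme, the retrieved cases point to explainers that satisfied users in comparable scenarios, which is the kind of reuse of explanation experience the abstract describes.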