“…eXplainable Artificial Intelligence (XAI) aims to provide insights into the decision-making process of AI models and has been increasingly applied to the geosciences (e.g., Toms et al, 2021; Ebert-Uphoff and Hilburn, 2020; Hilburn et al, 2021; Barnes et al, 2019, 2020; Mayer and Barnes, 2021; Keys et al, 2021; Sonnewald and Lguensat, 2021). XAI methods show promising results in calibrating model trust and assisting in learning new science (see, for example, McGovern et al, 2019; Toms et al, 2020; Sonnewald and Lguensat, 2021; Clare et al, 2022; Mamalakis et al, 2022a). A popular subcategory of XAI is the so-called local attribution methods, which compute the attribution of a model's prediction to the input variables (also referred to as "input features").…”
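To make the idea of a local attribution concrete, here is a minimal sketch (a hypothetical illustration, not taken from any of the cited works) using gradient-times-input on a linear model, where each feature's attribution is exactly its contribution to the prediction:

```python
import numpy as np

# Hypothetical linear "model": y = w . x + b
w = np.array([0.5, -2.0, 1.5])
b = 0.1
x = np.array([1.0, 0.2, -1.0])  # one input sample

def predict(x):
    return w @ x + b

# Local attribution via gradient * input:
# for a linear model dy/dx_i = w_i, so each feature's
# attribution is w_i * x_i, and the attributions sum
# to the prediction minus the bias term.
attribution = w * x

print(attribution)            # per-feature contributions
print(attribution.sum() + b)  # reconstructs the prediction
```

For nonlinear models the gradient is only a local linearization, which is why many XAI attribution methods (e.g., integrated gradients, layer-wise relevance propagation) refine this basic recipe.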